HPE Apollo 2000 System User Guide
Abstract
This document is for the person who installs, administers, and troubleshoots servers and storage
systems. Hewlett Packard Enterprise assumes you are qualified in the servicing of computer
equipment and trained in recognizing hazards in products with hazardous energy levels.
Part Number: 797871-401
Published: May 2017
Edition: 11

Summary of Contents for HPE Apollo 2000

  • Page 1 HPE Apollo 2000 System User Guide Abstract This document is for the person who installs, administers, and troubleshoots servers and storage systems. Hewlett Packard Enterprise assumes you are qualified in the servicing of computer equipment and trained in recognizing hazards in products with hazardous energy levels.
  • Page 2 © 2014, 2017 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
  • Page 3: Table Of Contents

    Contents: HPE Apollo 2000 System, 8; Introduction, 8; Planning the installation, 9; Safety and regulatory compliance, 9; Product QuickSpecs, 9; Determine power and cooling configurations, 9; Power requirements, 9; HPE Apollo Platform Manager, 9; Hot-plug power supply calculations, 9; Server warnings and cautions, 10; Space and airflow requirements...
  • Page 4 Remove the RCM module, 42; Remove the power supply, 43; Remove the security bezel, 43; Removing the drive, 44; Remove the chassis access panel, 45; Install the chassis access panel, 46; Remove the chassis from the rack, 47; Remove the rear I/O blank, 48; Install the rear I/O blank...
  • Page 5 Installing the processor and heatsink options, 152; Installing the dedicated iLO management port module option, 156; Enabling the dedicated iLO management module, 157; HP Trusted Platform Module option, 158; Installing the Trusted Platform Module board, 158; Retaining the recovery key/password, 160; Enabling the Trusted Platform Module, 160...
  • Page 6 Erase Utility, 178; Scripting Toolkit for Windows and Linux, 178; Service Pack for ProLiant, 178; Service Pack for ProLiant, 179; HP Smart Update Manager, 179; UEFI System Utilities, 179; Using UEFI System Utilities, 179; Flexible boot control, 180; Restoring and customizing configuration settings, 180; Secure Boot configuration...
  • Page 7 Updating firmware or System ROM, 184; FWUPDATE utility, 184; FWUpdate command from within the Embedded UEFI Shell, 184; Firmware Update application in the UEFI System Utilities, 185; Online Flash components, 185; Drivers, 185; Software and firmware, 186; Operating System Version Support, 186; Version control, 186; Operating systems and virtualization software support for ProLiant servers, 186; HPE Technology Service Portfolio...
  • Page 8: Hpe Apollo 2000 System

    HPE Apollo 2000 System Introduction The HPE Apollo 2000 System consists of a chassis and nodes. There are three chassis options with different storage configurations. To ensure proper thermal cooling, the four server tray slots on the chassis must be populated with server nodes or node blanks.
  • Page 9: Planning The Installation

    Planning the installation Safety and regulatory compliance For important safety, environmental, and regulatory information, see Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise website (http://www.hpe.com/support/Safety-Compliance-EnterpriseProducts). Product QuickSpecs For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website.
  • Page 10: Server Warnings And Cautions

    Server warnings and cautions WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the equipment: • Observe local occupational health and safety requirements and guidelines for manual material handling. • Remove all installed components from the chassis before installing or moving the chassis. •...
  • Page 11: Temperature Requirements

    When vertical space in the rack is not filled by a server or rack component, the gaps between the components cause changes in airflow through the rack and across the servers. Cover all gaps with blanking panels to maintain proper airflow. CAUTION: Always use blanking panels to fill empty vertical spaces in the rack.
  • Page 12: Rack Warnings

    Rack warnings WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that: • The leveling jacks are extended to the floor. • The full weight of the rack rests on the leveling jacks. • The stabilizing feet are attached to the rack if it is a single-rack installation.
  • Page 13: Component Identification

    Component identification Chassis front panel components • HPE Apollo r2200 Chassis Item Description Left bezel ear Low-profile LFF hot-plug drives Right bezel ear Chassis serial label pull tab • HPE Apollo r2600 Chassis Item Description Left bezel ear SFF SmartDrives Right bezel ear Chassis serial label pull tab Non-removable bezel blank...
  • Page 14: Chassis Front Panel Leds And Buttons

    Item Description: Left bezel ear; SFF SmartDrives; Right bezel ear; Chassis serial label pull tab; Non-removable bezel blank. Chassis front panel LEDs and buttons. Item Description Status: Power On/Standby button and system power LED (Node 1): Solid green = System on; Flashing green = Performing power on sequence; Solid amber = System in standby; Off = No power present...
  • Page 15 Item Description Status Health LED (Node 2) Solid green = Normal Flashing amber = System degraded Flashing red = System critical Health LED (Node 1) Solid green = Normal Flashing amber = System degraded Flashing red = System critical Health LED (Node 3) Solid green = Normal Flashing amber = System degraded Flashing red = System critical...
  • Page 16: Chassis Rear Panel Components

    Chassis rear panel components Four 1U nodes Item Description Node 4 Node 3 RCM module (optional) Power supply 2 Power supply 1 Node 2 Node 1 Two 2U nodes Item Description Node 3 RCM module (optional) Power supply 2 Power supply 1 Node 1 Chassis rear panel components...
  • Page 17: Chassis Rear Panel Leds

    Chassis rear panel LEDs Item Description Status Power supply 2 LED Solid green = Normal Off = One or more of the following conditions exists: • Power is unavailable • Power supply failed • Power supply is in standby mode •...
  • Page 18: Node Rear Panel Leds And Buttons

    Item Description Node serial number and iLO label pull tab SUV connector USB 3.0 connector Dedicated iLO port (optional) NIC connector 1 NIC connector 2 2U node rear panel components Item Description Node serial number and iLO label pull tab SUV connector USB 3.0 connector Dedicated iLO port (optional)
  • Page 19 Item Description Status Power button/LED Solid green = System on Flashing green = Performing power on sequence Solid amber = System in standby Off = No power present UID button/LED Solid blue = Activated ◦ 1 flash per second = Remote management or firmware upgrade in progress ◦...
  • Page 20 Item Description Status iLO link LED Green = Linked to network Off = No network connection NIC link LED Green = Linked to network Off = No network connection NIC activity LED Green or flashing green = Network activity Off = No network activity When the LEDs described in this table flash simultaneously, a power fault has occurred.
  • Page 21: Power Fault Leds

    Item Description Status Power button/LED Solid green = System on Flashing green = Performing power on sequence Solid amber = System in standby Off = No power present UID button/LED Solid blue = Activated ◦ 1 flash per second = Remote management or firmware upgrade in progress ◦...
  • Page 22: System Board Components

    Subsystem LED behavior: System board = 1 flash; Processor = 2 flashes; Memory = 3 flashes; Riser board PCIe slots = 4 flashes; FlexibleLOM = 5 flashes; Removable HPE Flexible Smart Array controller/Smart SAS HBA controller = 6 flashes; System board PCIe slots = 7 flashes; Power backplane or storage backplane = 8 flashes; Power supply = 9 flashes...
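
    The flash-code table above maps directly to a small lookup when decoding a power fault. The sketch below is illustrative only, not an HPE-supplied utility; the function and variable names are hypothetical.

```python
# Power fault LED flash codes, transcribed from the table above.
POWER_FAULT_FLASH_CODES = {
    1: "System board",
    2: "Processor",
    3: "Memory",
    4: "Riser board PCIe slots",
    5: "FlexibleLOM",
    6: "Removable HPE Flexible Smart Array controller/Smart SAS HBA controller",
    7: "System board PCIe slots",
    8: "Power backplane or storage backplane",
    9: "Power supply",
}

def decode_power_fault(flash_count: int) -> str:
    """Return the subsystem indicated by the observed number of flashes."""
    return POWER_FAULT_FLASH_CODES.get(flash_count, "Unknown flash code")

print(decode_power_fault(3))  # -> Memory
```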
  • Page 23: System Maintenance Switch

    Item Description Dedicated iLO port connector NMI header PCIe x16 riser board connector* microSD slot System battery M.2 SSD riser connector TPM connector Processor 1 Processor 2 For more information on the riser board slots supported by the onboard PCI riser connectors, see PCIe riser board slot definitions.
  • Page 24: Nmi Functionality

    CAUTION: Clearing CMOS, NVRAM, or both deletes configuration information. Be sure to configure the node properly to prevent data loss. IMPORTANT: Before using the S7 switch to change to Legacy BIOS Boot Mode, be sure the HPE Dynamic Smart Array B140i Controller is disabled. Do not use the B140i controller when the node is in Legacy BIOS Boot Mode.
  • Page 25: Fan Locations

    Fan locations Drive bay numbering CAUTION: To prevent improper cooling and thermal damage, do not operate the chassis unless all bays are populated with a component or a blank. NOTE: A SATA or mini-SAS cable must be installed in a node for the node to correspond to drives in the chassis.
  • Page 26: Hpe Apollo R2600 Chassis Drive Bay Numbering

    HPE Apollo r2600 Chassis drive bay numbering One 1U node corresponds to a maximum of six SFF SmartDrives. • Node 1 corresponds to drive bays 1-1 through 1-6. • Node 2 corresponds to drive bays 2-1 through 2-6. • Node 3 corresponds to drive bays 3-1 through 3-6. •...
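
    The r2600 numbering scheme above ("node-bay") is regular enough to compute. Below is a minimal sketch, assuming the four-node r2600 layout with six SFF bays per 1U node; the function name is hypothetical.

```python
def r2600_drive_bays(node: int, bays_per_node: int = 6) -> list[str]:
    """Drive bay labels for a 1U node in the HPE Apollo r2600 Chassis.

    Node N corresponds to bays N-1 through N-6, per the numbering above.
    """
    if node not in (1, 2, 3, 4):
        raise ValueError("The chassis holds nodes 1 through 4")
    return [f"{node}-{bay}" for bay in range(1, bays_per_node + 1)]

print(r2600_drive_bays(2))  # -> ['2-1', '2-2', '2-3', '2-4', '2-5', '2-6']
```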
  • Page 27: Hpe Apollo R2800 Chassis Drive Bay Numbering

    HPE Apollo r2800 Chassis drive bay numbering IMPORTANT: The HPE Apollo r2800 Chassis does not support nodes using the HPE Dynamic Smart Array B140i Controller or the HPE P840 Smart Array Controller. Hewlett Packard Enterprise recommends installing an HPE H240 Host Bus Adapter or HPE P440 Smart Array Controller. For information on drive bay mapping in the HPE Apollo r2800 Chassis and the factory default configuration, see "Drive bay mapping for the HPE Apollo r2800 Chassis."...
  • Page 28: Hot-Plug Drive Led Definitions

    Bay 10 Hot-plug drive LED definitions SmartDrive hot-plug drive definitions Item Status Definition Locate Solid blue The drive is being identified by a host application. Flashing blue The drive carrier firmware is being updated or requires an update. Activity ring Rotating green Drive activity.
  • Page 29: Low-Profile Lff Hot-Plug Drive Led Definitions

    Item Status Definition Drive status Solid green The drive is a member of one or more logical drives. Flashing green The drive is rebuilding or performing a RAID migration, strip size migration, capacity expansion, or logical drive extension, or is erasing. Flashing amber/green The drive is a member of one or more logical drives and predicts...
  • Page 30 Online/Activity LED (green) Fault/UID LED (amber/blue) Definition On, off, or flashing Alternating amber and blue One or more of the following conditions exist: • The drive has failed. • A predictive failure alert has been received for this drive. • The drive has been selected by a management application.
  • Page 31: Accelerator Numbering

    Online/Activity LED (green) Fault/UID LED (amber/blue) Definition Solid amber A critical fault condition has been identified for this drive and the controller has placed it offline. Replace the drive as soon as possible. Flashing amber A predictive failure alert has been received for this drive.
  • Page 32: Rcm Module Components

    Item Description Accelerator 1 Accelerator 2 For more information, see "Accelerator options." RCM module components Item Description iLO connector HPE APM 2.0 connector iLO connector For more information, see "Installing the RCM module option." RCM module components...
  • Page 33: Rcm Module Leds

    RCM module LEDs Item Description iLO activity LED Green or flashing green = Network activity Off = No network activity iLO link LED Green = Linked to network Off = No network connection iLO link LED Green = Linked to network Off = No network connection iLO activity LED Green or flashing green = Network activity Off = No network activity...
  • Page 34 Form factor / Slot number / Slot description: Storage controller or low-profile PCIe NIC card, PCIe3 x16 (16, 8, 4, 1) for Processor 1. For more information on installing a storage controller, see "Controller options." • Single-slot 1U right PCI riser cage assembly for Processor 2 (PN 798182-B21) Form factor / Slot number / Slot description...
  • Page 35 For more information on installing a storage controller, see "Controller options." • Single-slot 1U right PCI riser cage assembly for Processor 1 (PN 819939-B21) Form factor / Slot number / Slot description: Storage controller or low-profile PCIe NIC card, PCIe3 x16 (16, 8, 4, 1) for Processor 1. For more information on installing a storage controller, see "Controller options."...
  • Page 36 Form factor / Slot number / Slot description: Storage controller or low-profile PCIe NIC card, PCIe3 x16 (16, 8, 4, 1) for Processor 1. For more information on installing a storage controller, see "Controller options." • FlexibleLOM 2U node riser cage assembly (PN 798184-B21) Item / Form factor / Slot number...
  • Page 37 For more information on installing an accelerator, see "Accelerator options." • Three-slot 11OS PCI riser cage assembly (PN 798186-B21) and Three-slot Enhanced 11OS PCI riser cage assembly (PN 852767-B21) Item / Form factor / Slot number / Slot description: Accelerator card, PCIe3 x16 (16, 8, 4, 1) for Processor 1; Storage controller or PCIe3 x16 (16, 8, 4, 1)
  • Page 38 Item / Form factor / Slot number / Slot description: Accelerator card, PCIe3 x16 (16, 8, 4, 1) for Processor 2; Storage controller or low-profile PCIe NIC card, PCIe3 x16 (8, 4, 1) for Processor 2; Accelerator card, PCIe3 x16 (16, 8, 4, 1) for Processor 2. For more information on installing a storage controller, see "Controller options."...
  • Page 39 Item / Form factor / Slot number / Slot description: Accelerator card, PCIe3 x16 (16, 8, 4, 1) for Processor 2; Storage controller or low-profile PCIe NIC card, PCIe3 x16 (8, 4, 1) for Processor 2; Accelerator card, PCIe3 x16 (16, 8, 4, 1) for Processor 2. For more information on installing a storage controller, see "Controller options."...
  • Page 40: Operations

    Operations Power up the nodes About this task The SL/XL Chassis Firmware initiates an automatic power-up sequence when the nodes are installed. If the default setting is changed, use one of the following methods to power up each node: • Use a virtual power button selection through iLO.
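
    One of the power-up methods above is the iLO virtual power button. Below is a minimal sketch of pressing it remotely, assuming the node's iLO exposes the standard Redfish ComputerSystem.Reset action; the hostname and credentials are placeholders, and the exact action URI should be verified against your iLO firmware.

```python
# A minimal sketch of pressing the iLO virtual power button remotely,
# assuming the node's iLO supports the standard Redfish
# ComputerSystem.Reset action. Hostname and credentials are placeholders.
import requests

ILO_HOST = "https://ilo-node1.example.com"  # hypothetical iLO address
AUTH = ("Administrator", "password")        # placeholder credentials

resp = requests.post(
    f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
    json={"ResetType": "On"},  # power the node on
    auth=AUTH,
    verify=False,              # lab-only: skip TLS certificate verification
)
resp.raise_for_status()
print("Power-on request accepted:", resp.status_code)
```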
  • Page 41 CAUTION: To avoid damage to the node , always support the bottom of the node when removing it from the chassis . CAUTION: To ensure proper thermal cooling, the four server tray slots must be populated with server nodes or node blanks.
  • Page 42: Remove The Rcm Module

    CAUTION: To avoid damage to the device, do not use the removal handle to carry it. 4. Place the node on a flat, level surface. Remove the RCM module Procedure 1. Power down all nodes. 2. Access the product rear panel. 3.
  • Page 43: Remove The Power Supply

    Remove the power supply Prerequisites Before removing the power supply, note the configuration and possible impact to the system. • If two power supplies are installed, removal or failure of one of the power supplies might result in throttling or shut down of the server nodes. For more information, see "Power capping modes." •...
  • Page 44: Removing The Drive

    Removing the drive About this task CAUTION: To prevent improper cooling and thermal damage, do not operate the chassis unless all bays are populated with either a component or blank. Procedure 1. If installed, remove the security bezel. 2. Remove the drive: •...
  • Page 45: Remove The Chassis Access Panel

    Remove the chassis access panel Procedure Power down all nodes. Disconnect all peripheral cables from the nodes and chassis. WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the equipment: • Observe local occupational health and safety requirements and guidelines for manual material handling.
  • Page 46: Install The Chassis Access Panel

    10. Slide the access panel back about 1.5 cm (0.5 in). 11. Lift and remove the access panel. Install the chassis access panel Procedure 1. Install the chassis access panel. a. Place the access panel on the chassis, align it with the pin, and slide it toward the front of the server. b.
  • Page 47: Remove The Chassis From The Rack

    Remove the chassis from the rack About this task WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the equipment: • Observe local occupational health and safety requirements and guidelines for manual material handling.
  • Page 48: Remove The Rear I/O Blank

    Remove the chassis from the rack. For more information, see the documentation that ships with the rack mounting option. 10. Place the chassis on a flat surface. Remove the rear I/O blank Procedure 1. Power down the node. 2. Disconnect all peripheral cables from the node. 3.
  • Page 49: Install The Rear I/O Blank

    • 2U rear I/O blank CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
  • Page 50 • 1U left rear I/O blank • 2U rear I/O blank Operations...
  • Page 51: Remove The Air Baffle

    2. Install the node into the chassis. 3. Connect all peripheral cables to the node. 4. Power up the node. Remove the air baffle Procedure 1. Power down the node. 2. Disconnect all peripheral cables from the node. 3.
  • Page 52: Install The Air Baffle

    • 2U air baffle Install the air baffle About this task CAUTION: To prevent damage to the server, ensure that all DIMM latches are in closed and locked position before installing the air baffle. Procedure 1. Install the air baffle: a.
  • Page 53: Remove The Bayonet Board Assembly

    2. If a second processor and heatsink are installed, press down on the rear of the air baffle until it snaps into place on the heatsink. 3. Install any removed PCI riser cage assemblies. 4. Install the node into the chassis. 5.
  • Page 54 If a B140i SATA cable is installed, disconnect it from the system board. Remove the bayonet board assembly from the node. • 1U bayonet board assembly • 2U bayonet board assembly 10. If installing a SATA or mini-SAS cable, remove the bayonet board bracket from the bayonet board. •...
  • Page 55: Install The Bayonet Board Assembly

    • 2U bayonet board bracket Install the bayonet board assembly Procedure Connect the SATA or mini-SAS cable to the bayonet board. • 1U bayonet board Install the bayonet board assembly...
  • Page 56 IMPORTANT: If connecting a SATA or Mini-SAS cable to the 2U bayonet board, route the cable under the padding before installing the 2U bayonet board bracket. • 2U bayonet board Install the bayonet board bracket onto the bayonet board. • 1U bayonet board bracket Operations...
  • Page 57 • 2U bayonet board bracket Install the bayonet board assembly into the node: • 1U bayonet board assembly Operations...
  • Page 58: Remove The Pci Riser Cage Assembly

    • 2U bayonet board assembly If any SATA or mini-SAS cables are installed, secure the cables under the thin plastic cover along the side of the node tray. If removed, connect the B140i SATA cable to the system board. If an accelerator power cable was removed, connect it to the bayonet board. Install any removed PCI riser cage assemblies.
  • Page 59: Single-Slot Left Pci Riser Cage Assembly

    CAUTION: To prevent damage to the server or expansion boards, power down the server, and disconnect all power cords before removing or installing the PCI riser cage. Single-slot left PCI riser cage assembly Procedure 1. Power down the node. 2.
  • Page 60: Single-Slot 1U Node Right Pci Riser Cage Assemblies

    CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed. Single-slot 1U node right PCI riser cage assemblies About this task NOTE:...
  • Page 61: Flexiblelom 1U Node Riser Cage Assembly

    FlexibleLOM 1U node riser cage assembly Procedure 1. Power down the node. 2. Disconnect all peripheral cables from the node. 3. Remove the node from the chassis. 4. Do one of the following: • Remove the 1U left rear I/O blank. •...
  • Page 62: Flexiblelom 2U Node Riser Cage Assembly

    CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed. FlexibleLOM 2U node riser cage assembly Procedure 1.
  • Page 63: Three-Slot Riser Cage Assemblies

    Three-slot riser cage assemblies About this task NOTE: Three-slot riser cage assemblies feature different riser boards. For more information on the riser board slot specifications, see "PCIe riser board slot definitions." Procedure 1. Power down the node. 2. Disconnect all peripheral cables from the node. 3.
  • Page 64: Setup

    • T-10/T-15 Torx screwdriver (to install hardware options) • Flathead screwdriver (to remove the knockout on the dedicated iLO connector opening) • Hardware options Installation overview About this task To set up and install the HPE Apollo 2000 System: Setup...
  • Page 65: Installing Hardware Options

    Procedure Set up and install the rack. For more information, see the documentation that ships with the rack. Prepare the chassis: a. Remove the power supply. b. Remove the nodes. c. Remove all drives. NOTE: If planning to install the HPE Smart Storage Battery or redundant fan option, install these options into the chassis before installing the chassis into the rack.
  • Page 66 WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the equipment: • Observe local occupational health and safety requirements and guidelines for manual material handling. • Remove all installed components from the chassis before installing or moving the chassis. •...
  • Page 67: Chassis Component Installation

    Chassis component installation Installing a node into the chassis CAUTION: To ensure proper thermal cooling, the four server tray slots must be populated with server nodes or node blanks. • 1U node • 2U node Chassis component installation...
  • Page 68: Installing A Drive

    Installing a drive CAUTION: To ensure proper thermal cooling, do not operate the chassis unless all bays are populated with either a component or a blank. 1. Remove the drive blank. 2. Install the drives. Installing the power supplies CAUTION: Do not mix power supplies with different efficiency and wattage in the chassis.
  • Page 69: Powering Up The Chassis

    3. If planning to install an RCM module, install it now. 4. Connect all power cords and secure them with the strain relief strap. Powering up the chassis Connect the AC or DC power cables, depending on the power configuration. When the circuit breakers are powered, the chassis and Advanced Power Manager have power.
  • Page 70: Installing The Operating System

    1. Press the Power On/Standby button. 2. During the initial boot: • To modify the server configuration ROM default settings, press the F9 key in the ProLiant POST screen to enter the UEFI System Utilities screen. By default, the System Utilities menus are in the English language.
  • Page 71: Power Capping Modes

    With APM, the enclosure-level power capping feature can be expanded without the need to use the PPIC.EXE utility. A global power cap can be applied to all enclosures with one APM command, or different caps can be applied to user-defined groups by using flexible zones within the same rack. Power capping modes The following Power Management modes are standard and are configurable in the power management controller:
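
    As an alternative to the APM and PPIC.EXE workflows described above, a node-level power cap can also be applied through the standard Redfish Power schema if the managed iLO firmware exposes a writable PowerLimit. This is a hedged sketch, not the documented APM procedure; the address, credentials, and 900 W value are placeholders.

```python
# Hedged sketch: applying a node-level power cap through the standard
# Redfish Power schema instead of the APM or PPIC.EXE workflows described
# above. The address, credentials, and 900 W cap are placeholders; confirm
# that your iLO firmware exposes a writable PowerControl/PowerLimit.
import requests

ILO_HOST = "https://ilo-node1.example.com"  # hypothetical iLO address
AUTH = ("Administrator", "password")        # placeholder credentials

payload = {"PowerControl": [{"PowerLimit": {"LimitInWatts": 900}}]}
resp = requests.patch(
    f"{ILO_HOST}/redfish/v1/Chassis/1/Power/",
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only: skip TLS certificate verification
)
resp.raise_for_status()
```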
  • Page 72: Setting The Chassis Power Cap Mode With The Ppic Utility

    Setting the chassis power cap mode with the PPIC utility 1. Download and install the ProLiant Power Interface Control Utility from the Hewlett Packard Enterprise website. 2. Log in to the node, and then run the PPIC utility. 3. To set the power cap mode, perform one of the following steps: •...
  • Page 73: Factory Default Configuration

    The server nodes may be remotely restarted through the iLO remote interface, or may be locally restarted by pressing the power button for each node. This feature requires the following minimum firmware versions: • Apollo 2000 System Chassis firmware version 1.4.0 or later • Storage Expander firmware version 1.0 or later •...
  • Page 74: Registering The Server

    5. Power down the nodes. IMPORTANT: All nodes must remain powered off for at least 5 seconds after executing the configuration changes. 6. Power up the nodes. Registering the server To experience quicker service and more efficient support, register the product at the Hewlett Packard Enterprise Product Registration website.
  • Page 75: Hardware Options Installation

    Hardware options installation Introduction If more than one option is being installed, read the installation instructions for all the hardware options and identify similar steps to streamline the installation process. WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system components to cool before touching them.
  • Page 76: Removing A Drive Blank

    Removing a drive blank Procedure 1. If installed, remove the security bezel. 2. Remove the drive blank. Installing a hot-plug drive About this task The chassis can support up to 12 drives in an LFF configuration and up to 24 drives in an SFF configuration. WARNING: To reduce the risk of injury from electric shock, do not install more than one drive carrier at a time.
  • Page 77: Installing The Node Blank

    4. Install the drive: • SFF SmartDrive • Low-profile LFF hot-plug drive 5. Determine the status of the drive from the drive LED definitions. 6. If removed, install the security bezel. For information on drive bay mapping in the HPE Apollo r2800 Chassis and the factory default configuration, see "Drive bay mapping for the HPE Apollo r2800 Chassis."...
  • Page 78: Installing The Rcm Module Option

    Procedure 1. Install the node blank into the left side of the server chassis. 2. Install the node blank into the right side of the server chassis. Installing the RCM module option Prerequisites Observe the following rules and limitations when installing an RCM module: •...
  • Page 79 • Use either the APM port or an iLO port to connect to a network. Having both ports connected at the same time results in a loopback condition. • Do not connect both iLO ports to the network at the same time. Only one iLO port can be connected to the network, while the other iLO port can be used only as a connection to a second enclosure.
  • Page 80 If two power supplies are installed, do the following: a. Install the RCM module onto the bottom power supply. b. Release the strain relief strap on the top power supply handle. c. Secure both power cords in the strain relief strap on the top power supply handle. If using the RCM module iLO ports to connect the chassis to a network, connect all cables to the RCM module and the network.
  • Page 81: Installing The Rcm 2.0 To 1.0 Adapter Cable

    NOTE: Arrow indicates connection to the network. If an HPE APM is installed, connect the cables to the RCM module, the APM, and the network. Reconnect all power: a. Connect each power cord to the power source. b. Connect the power cord to the chassis. 10.
  • Page 82: Redundant Fan Option

    5. Connect the cables to the RCM module, the APM, and the network. 6. Reconnect all power: a. Connect each power cord to the power source. b. Connect the power cord to the chassis. 7. Power up the nodes. Redundant fan option Fan population guidelines To provide sufficient airflow to the system if a fan fails, the server supports redundant fans.
  • Page 83: Installing The Fan Option

    Fan population by configuration (fan bays 1 through 8): in the non-redundant configuration, four fan bays are populated and the remaining four are empty; in the redundant configuration, all eight fan bays are populated. • In a redundant fan mode: ◦ If one fan fails, the system continues to operate without redundancy. •...
  • Page 84 11. Connect the fan cables to the power connectors. 12. Install the access panel. 13. Install the chassis into the rack. 14. If removed, install the security bezel. 15. Install all nodes, drives and power supplies. 16. If removed, install the RCM module. 17.
  • Page 85: Memory Options

    Memory options IMPORTANT: This node does not support mixing LRDIMMs and RDIMMs. Attempting to mix any combination of these DIMMs can cause the node to halt during BIOS initialization. The memory subsystem in this node can support LRDIMMs and RDIMMs: •...
  • Page 86 Type Rank Capacity (GB) Native speed Voltage (MT/s) RDIMM Dual 2400 RDIMM Dual 2400 LRDIMM Dual 2400 LRDIMM Quad 2400 LRDIMM Octal 2400 Populated DIMM speed (MT/s) Operating memory speed is a function of rated DIMM speed, the number of DIMMs installed per channel, processor model, and the speed selected in the BIOS/Platform Configuration (RBSU) of the UEFI System Utilities.
  • Page 87: Smartmemory

    DIMM type / DIMM rank / Capacity (GB) / Maximum capacity for one processor (GB) / Maximum capacity for two processors (GB): RDIMM Dual-rank; LRDIMM Dual-rank; RDIMM Dual-rank; LRDIMM Quad-rank. Maximum memory capacity - Intel Xeon E5-2600 v4 processor installed. DIMM type / DIMM rank / Capacity (GB) / Maximum capacity for one processor (GB) / Maximum capacity for two processors (GB)...
  • Page 88: Single-, Dual-, And Quad-Rank Dimms

    Single-, dual-, and quad-rank DIMMs To understand and configure memory protection modes properly, an understanding of single-, dual-, and quad-rank DIMMs is helpful. Some DIMM configuration requirements are based on these classifications. A single-rank DIMM has one set of memory chips that is accessed while writing to or reading from the memory.
  • Page 89: Memory Configurations

    For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website. Memory configurations To optimize node availability, the node supports the following AMP modes: • Advanced ECC—Provides up to 4-bit error correction and enhanced performance over Lockstep mode. This mode is the default option for this node.
  • Page 90: Advanced Ecc Population Guidelines

    • DIMMs should be populated starting farthest from the processor on each channel. • For DIMM spare replacement, install the DIMMs per slot number as instructed by the system software. For more information about node memory, see the Hewlett Packard Enterprise website. Advanced ECC population guidelines For Advanced ECC mode configurations, observe the following guidelines: •...
  • Page 91: Installing Sata And Mini-Sas Cable Options

    Procedure Power down the node. Disconnect all peripheral cables from the node. Remove the node from the chassis. Place the node on a flat, level surface. If installed in a 2U node, remove the FlexibleLOM 2U node riser cage assembly. If installed in a 2U node, remove the three-slot riser cage assembly.
  • Page 92 About this task For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website. Procedure Power down the node. Disconnect all peripheral cables from the node. Remove the node from the chassis. Place the node on a flat, level surface.
  • Page 93: Pci Riser Cage Assembly Options

    Install the bayonet board bracket and bayonet board assembly. 10. If installing a host bus adapter or Smart Array controller, install it into the riser cage. 11. Do one of the following: • Connect the B140i SATA cable to the system board. •...
  • Page 94 If you are installing an expansion board, remove the PCI blank. Install any expansion board options. Connect all necessary internal cabling to the expansion board. For more information on these cabling requirements, see the documentation that ships with that option. In a 1U node, install the single-slot left PCI riser cage assembly and then secure it with three T-10 screws.
  • Page 95: Single-Slot 1U Node Right Pci Riser Cage Assembly Options

    b. Install the three-slot riser cage assembly and then secure it with six T-10 screws. CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
  • Page 96 Install any expansion board options. Connect all necessary internal cabling to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option. 10. Install the single-slot 1U node right PCI riser cage assembly and then secure it with four T-10 screws. 11.
  • Page 97: Single-Slot 2U Node Pci Riser Cage Assembly Option

    13. Connect all peripheral cables to the node. 14. Power up the node. Single-slot 2U node PCI riser cage assembly option Procedure Power down the node. Disconnect all peripheral cables from the node. Remove the node from the chassis. Place the node on a flat, level surface. Remove the 2U rear I/O blank.
  • Page 98: Flexiblelom 1U Node Riser Cage Assembly Option

    b. Install the FlexibleLOM 2U node riser cage assembly and secure it with five T-10 screws. CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
  • Page 99 Install the FlexibleLOM adapter. Install the FlexibleLOM riser cage assembly. Hardware options installation...
  • Page 100: Flexiblelom 2U Node Riser Cage Assembly Option

    10. Do one of the following: • Install the 1U left rear I/O blank. • Install the single-slot left PCI riser cage assembly. CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
  • Page 101 Install the FlexibleLOM adapter. Do the following: a. Install the single-slot 2U node PCI riser cage assembly and secure it with two T-10 screws. b. Install the FlexibleLOM 2U node riser cage assembly and secure it with five T-10 screws. Hardware options installation...
  • Page 102: Three-Slot Riser Cage Assembly Options

    CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed. IMPORTANT: If the PCIe riser cage assembly is not seated properly, then the server does not power up.
  • Page 103 If installing an expansion board, do the following: a. Remove the riser cage bracket. b. Select the appropriate PCIe slot and remove any PCI blanks. Hardware options installation...
  • Page 104 Install any expansion board options. Connect all necessary internal cables to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option. 10. Install the riser cage bracket. 11. Install the three-slot riser cage assembly and then secure it with six T-10 screws. Hardware options installation...
  • Page 105: Expansion Board Options

    CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed. IMPORTANT: If the PCIe riser cage assembly is not seated properly, then the server does not power up.
  • Page 106: Installing The Expansion Board

    Installing the expansion board About this task Determine if there are thermal requirements for the option. For a list of options that have thermal limitations, see "Thermal limitations." To install the component: Procedure Power down the node. Disconnect all peripheral cables from the node. Remove the node from the chassis.
  • Page 107 • Slot 2 of a single-slot 1U right PCI riser cage assembly • Slot 2 of the FlexibleLOM 2U riser cage assembly Hardware options installation...
  • Page 108: Controller Options

    • Slot 2 of a three-slot riser cage assembly Connect all necessary internal cabling to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option. 10. Install any removed PCI riser cage assemblies. 11.
  • Page 109: Storage Controller Installation Guidelines

    The node supports FBWC. FBWC consists of a cache module and a Smart Storage Battery Pack. The DDR cache module buffers and stores data being written by an integrated Gen9 P-series Smart Array Controller. CAUTION: The cache module connector does not use the industry-standard DDR3 mini-DIMMs. Do not use the controller with cache modules designed for other controller models, because the controller can malfunction and you can lose data.
  • Page 110 WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the equipment: • Observe local occupational health and safety requirements and guidelines for manual material handling. • Remove all installed components from the chassis before installing or moving the chassis. •...
  • Page 111 12. Do the following: a. Connect the Smart Storage Battery cable to the power distribution board. b. Install the Smart Storage Battery holder into the chassis. IMPORTANT: Ensure that the battery cable is connected to the correct connector. For detailed cabling information, see "HPE Smart Storage Battery cabling."...
  • Page 112: Installing The Storage Controller And Fbwc Module Options

    Installing the storage controller and FBWC module options About this task IMPORTANT: If planning to install a Smart Storage Battery, install it in the chassis before installing the storage controller and FBWC module in the node. Procedure Power down the node. Disconnect all peripheral cables from the node.
  • Page 113 12. Install the cache module on the storage controller. 13. If you installed a cache module on the storage controller, connect the cache module backup power cable to the riser board. 14. Connect all necessary internal cables to the storage controller. For internal cabling information, see "SATA and Mini-SAS cabling."...
  • Page 114 • Slot 1 of the single-slot 2U node PCI riser cage assembly • Slot 2 of single-slot 1U right PCI riser cage assembly Hardware options installation...
  • Page 115 • Slot 2 of the FlexibleLOM 2U riser cage assembly • Slot 2 of a three-slot riser cage assembly Hardware options installation...
  • Page 116 16. Connect the SATA or mini-SAS cable to the bayonet board. • 1U bayonet board IMPORTANT: If connecting a SATA or Mini-SAS cable to the 2U bayonet board, route the cable under the padding before installing the 2U bayonet board bracket. •...
  • Page 117: Accelerator Options

    17. Install the bayonet board bracket and bayonet board assembly. 18. Route and secure the cable under the thin plastic cover. 19. Install any removed PCI riser cage assemblies. 20. Install the node into the chassis. 21. Connect all peripheral cables to the node. 22.
  • Page 118: Supported Riser Cage Assemblies And Accelerator Power Cables

    ◦ If installing a single NVIDIA GRID K2 RAF GPU, NVIDIA Tesla K80 GPU, NVIDIA Tesla M60 GPU, NVIDIA Tesla M40 GPU, NVIDIA Tesla P40 GPU, or NVIDIA Tesla P100 GPU, install it into slot 3 and leave slot 4 empty. ◦...
  • Page 119 Dual accelerator options Power cable 2-pin adapter cables Both 825634-001 and 825635-001 Not supported • Intel Xeon Phi Coprocessor are required 5110P • AMD FirePro S9150 GPU • NVIDIA Tesla K40 GPU Accelerator Both 825636-001 and 825637-001 Not supported • NVIDIA Tesla M40 GPU are required Accelerator...
  • Page 120: Installing One Accelerator In A Flexiblelom 2U Node Riser Cage Assembly

    Dual accelerator options Power cable 2-pin adapter cables Both 825634-001 and 825635-001 Not supported • AMD FirePro S9150 GPU are required Dual accelerator configurations only. If installing this accelerator model, populate both slot 3 and slot 4 with accelerators to ensure proper thermal cooling. Three-slot GPU-direct with re-timer PCI riser cage assembly (PN 827353-B21) Dual accelerator options Power cable...
  • Page 121 Connect the single accelerator power cable to the connector on the riser board. If installing a NVIDIA Tesla K40 GPU, install the front support bracket for Accelerator 1 with four M2.5 screws. Hardware options installation...
  • Page 122 10. Install the accelerator into the PCI riser cage assembly. • NVIDIA Tesla K40 GPU • Intel Xeon Phi Coprocessor 5110P Hardware options installation...
  • Page 123 • AMD FirePro S9150 GPU • NVIDIA Quadro K4200 GPU or NVIDIA Quadro M4000 GPU Hardware options installation...
  • Page 124: Installing Nvidia Grid K2 Raf Gpus In A Three-Slot Riser Cage Assembly

    IMPORTANT: If installing an Intel Xeon Phi Coprocessor 5110P, connect the power cable to the 2x4 connector only. Do not connect the power cable to the 2x3 connector. 11. Connect the power cable to the accelerator. For more information, see "Accelerator cabling." 12.
  • Page 125 Remove the two top PCI blanks from the riser cage assembly. Turn the riser cage assembly over and lay it along the bayonet board side of the node. Remove the existing rear support bracket from Accelerator 1. 10. Install the rear support bracket for Accelerator 1. Hardware options installation...
  • Page 126 11. Install Accelerator 1 into slot 3. 12. Connect the Accelerator 1 power cable to Accelerator 1. For more information, see "Accelerator cabling." NOTE: If installing a single NVIDIA GRID K2 RAF GPU, skip to step 17. 13. Remove the existing front and rear support brackets from Accelerator 2. 14.
  • Page 127 15. Install Accelerator 2 into slot 4. 16. Connect the Accelerator 2 power cable to Accelerator 2. 17. Connect the Accelerator 1 power cable to the Accelerator 2 power cable. IMPORTANT: Each NVIDIA GRID K2 RAF GPU requires a 2-pin adapter cable. 18.
  • Page 128: Installing Amd Firepro S7150 And S9150 Gpus In A Three-Slot Riser Cage Assembly

    19. Install the riser cage bracket. 20. Connect the power cable to the bayonet board. For more information, see "Accelerator cabling." 21. Install the three-slot riser cage assembly. 22. Install the node into the chassis. 23. Connect all peripheral cables to the node. 24.
  • Page 129 Procedure Power down the node. Disconnect all peripheral cables from the node. Remove the node from the chassis. Place the node on a flat, level surface. Remove the three-slot riser cage assembly. Remove the riser cage bracket. Remove the two top PCI blanks from the riser cage assembly. Turn the riser cage assembly over and lay it along the bayonet board side of the node.
  • Page 130 • AMD FirePro S9150 GPU 11. Install Accelerator 1 into slot 3. • AMD FirePro S7150 GPU Hardware options installation...
  • Page 131 • AMD FirePro S9150 GPU 12. Connect the Accelerator 1 power cable to Accelerator 1. For more information, see "Accelerator cabling." 13. Remove the existing rear support bracket from Accelerator 2. 14. Remove the cover from Accelerator 2. • AMD FirePro S9150 GPU Hardware options installation...
  • Page 132 • AMD FirePro S7150 GPU 15. If installed, remove the existing front support bracket from Accelerator 2. 16. Install the front support bracket onto Accelerator 2. Hardware options installation...
  • Page 133 • AMD FirePro S7150 GPU • AMD FirePro S9150 17. Reinstall the accelerator cover. 18. Install the rear support bracket. AMD FirePro S7150 Hardware options installation...
  • Page 134 AMD FirePro S9150 19. Install Accelerator 2 into slot 4. • AMD FirePro S7150 GPU Hardware options installation...
  • Page 135 • AMD FirePro S9150 GPU 20. Connect the Accelerator 2 power cable to Accelerator 2. 21. Connect the Accelerator 1 power cable to the Accelerator 2 power cable. 22. Install the riser cage bracket. Hardware options installation...
  • Page 136: Installing Intel Xeon Phi 5110P Coprocessors In A Three-Slot Riser Cage Assembly

    23. Connect the power cable to the bayonet board. For more information, see "Accelerator cabling." 24. Install the three-slot riser cage assembly. 25. Install the node into the chassis. 26. Connect all peripheral cables to the node. 27. Power up the node. Installing Intel Xeon Phi 5110P Coprocessors in a three-slot riser cage assembly About this task...
  • Page 137 Remove the two top PCI blanks from the riser cage assembly. Turn the riser cage assembly over and lay it along the bayonet board side of the node. Remove the existing rear support bracket from Accelerator 1. 10. Install the rear support bracket for Accelerator 1. Hardware options installation...
  • Page 138 11. Install Accelerator 1 into slot 3. IMPORTANT: If installing an Intel Xeon Phi Coprocessor 5110P, Connect the power cable to the 2x4 connector only. Do not connect the power cable to the 2x3 connector. 12. Connect the Accelerator 1 power cable to Accelerator 1. For more information, see "Accelerator cabling."...
  • Page 139 15. Install Accelerator 2 into slot 4. IMPORTANT: If installing an Intel Xeon Phi Coprocessor 5110P, connect the power cable to the 2x4 connector only. Do not connect the power cable to the 2x3 connector. 16. Connect the Accelerator 2 power cable to Accelerator 2. 17.
  • Page 140: Installing Nvidia Tesla K80, K40, M60, And M40 Gpus In A Three-Slot Riser Cage Assembly

    19. Connect the power cable to the bayonet board. For more information, see "Accelerator cabling." 20. Install the three-slot riser cage assembly. 21. Install the node into the chassis. 22. Connect all peripheral cables to the node. 23. Power up the node. Installing NVIDIA Tesla K80, K40, M60, and M40 GPUs in a three-slot riser cage assembly About this task...
  • Page 141 Remove the two top PCI blanks from the riser cage assembly. Turn the riser cage assembly over and lay it along the bayonet board side of the node. Remove the existing rear support bracket from Accelerator 1. 10. If installing a NVIDIA Tesla K40 GPU, install the front support bracket for Accelerator 1 with four M2.5 screws.
  • Page 142 11. Install the rear support bracket for Accelerator 1. 12. Install Accelerator 1 into slot 3. Hardware options installation...
  • Page 143 13. Connect the Accelerator 1 power cable to Accelerator 1. For more information, see "Accelerator cabling." NOTE: If installing a single NVIDIA Tesla K80, M60, or M40 GPU, skip to step 18. NOTE: Single NVIDIA Tesla K40 GPUs are not supported in a three-slot riser cage assembly. 14.
  • Page 144: Installing Nvidia Tesla P40 And P100 Gpus And Bezel Blanks

    17. Connect the Accelerator 2 power cable to Accelerator 2. 18. Connect the Accelerator 1 power cable to the Accelerator 2 power cable. 19. Install the riser cage bracket. 20. Connect the power cable to the bayonet board. For more information, see "Accelerator cabling." 21.
  • Page 145: Bezel Blank Installation Guidelines For The Hpe Apollo R2200 Chassis And Hpe Apollo R2600 Chassis

    CAUTION: If NVIDIA Tesla P40 GPUs are installed in the server node, and the server node is installed in the HPE Apollo r2200 Chassis, the inlet ambient temperature must be maintained at or below 30°C (86°F). CAUTION: If NVIDIA Tesla P100 GPUs are installed in the server node, and the server node is installed in the HPE Apollo r2600 Chassis, the inlet ambient temperature must be maintained at or below 20°C (68°F).
  • Page 146: Installing A Bezel Blank

    • SFF bezel blanks are not required in the HPE Apollo r2600 Chassis if NVIDIA Tesla P40 GPUs are installed in the server node. • If an NVIDIA Tesla P100 GPU is installed in Node 1, SFF bezel blanks must be installed in drive bays 2-1, 2-2, 2-3, 2-4, 2-5, and 2-6.
  • Page 147: Installing Nvidia Tesla P40 And P100 Gpus In A Three-Slot Riser Cage Assembly

    • HPE Apollo r2200 Chassis with NVIDIA Tesla P40 or P100 GPUs installed in both Node 1 and Node 3. • HPE Apollo r2600 Chassis with NVIDIA Tesla P100 GPUs installed in both Node 1 and Node 3. NOTE: SFF bezel blanks are not required in the HPE Apollo r2600 Chassis if NVIDIA Tesla P40 GPUs are installed in the server node.
  • Page 148 Remove the two top PCI blanks from the riser cage assembly. Turn the riser cage assembly over and lay it along the bayonet board side of the node. 10. Remove the existing rear support bracket from Accelerator 1. 11. Install the rear support bracket for Accelerator 1. Hardware options installation...
  • Page 149 12. Install Accelerator 1 into slot 3. 13. Connect the Accelerator 1 power cable to Accelerator 1. For more information, see "Accelerator cabling." NOTE: If installing a single NVIDIA Tesla P40 or P100 GPU, skip to step 18. 14. Remove the existing front and rear support brackets from Accelerator 2. 15.
  • Page 150 16. Install Accelerator 2 into slot 4. 17. Connect the Accelerator 2 power cable to Accelerator 2. 18. Connect the Accelerator 1 power cable to the Accelerator 2 power cable. 19. Install the riser cage bracket. Hardware options installation...
  • Page 151: Installing The M.2 Sata Ssd Enablement Board Option

    20. Connect the power cable to the bayonet board. For more information, see "Accelerator cabling." 21. Install the three-slot riser cage assembly. 22. Install the node into the chassis. 23. Connect all peripheral cables to the node. 24. Power up the node. Installing the M.2 SATA SSD enablement board option About this task The M.2 SATA SSD enablement board can only be installed on the single-slot left PCI riser cage assembly...
  • Page 152: Installing The Processor And Heatsink Options

    • Single-slot 2U node PCI riser cage assembly If removed, install the storage controller. Install any removed PCI riser cage assemblies. 10. Install the node into the chassis. 11. Connect all peripheral cables to the node. 12. Power up the node. Installing the processor and heatsink options Prerequisites Determine if there are thermal requirements for the option.
  • Page 153 Procedure Power down the node. Disconnect all peripheral cables from the node. Remove the node from the chassis. Place the node on a flat, level surface. WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system components to cool before touching them.
  • Page 154 CAUTION: THE PINS ON THE SYSTEM BOARD ARE VERY FRAGILE AND EASILY DAMAGED. To avoid damage to the system board, do not touch the processor or the processor socket contacts. 10. Install the processor. Verify that the processor is fully seated in the processor retaining bracket by visually inspecting the processor installation guides on either side of the processor.
  • Page 155 CAUTION: Close and hold down the processor cover socket while closing the processor locking levers. The levers should close without resistance. Forcing the levers closed can damage the processor and socket, requiring system board replacement. 12. Press and hold the processor retaining bracket in place, and then close each processor locking lever. Press only in the area indicated on the processor retaining bracket.
  • Page 156: Installing The Dedicated Ilo Management Port Module Option

    a. Position the heatsink on the processor backplate. b. Tighten one pair of diagonally opposite screws halfway, and then tighten the other pair of screws. c. Finish the installation by completely tightening the screws in the same sequence. 15. Install the air baffle. 16.
  • Page 157: Enabling The Dedicated Ilo Management Module

    a. Insert a flat screwdriver into the knockout. b. Twist and pull to remove the knockout from the node. Install the dedicated iLO management port card into the node. If removed, install all rear I/O blanks. 10. Install any removed PCI riser cage assemblies. 11.
  • Page 158: Hp Trusted Platform Module Option

    The IP address of the enabled dedicated iLO connector appears on the POST screen on the subsequent boot-up. Access the Network Options screen again to view this IP address for later reference. HP Trusted Platform Module option When installing or replacing TPM, observe the following guidelines: •...
  • Page 159 Remove the node from the chassis. Place the node on a flat, level surface. Remove any installed PCI riser cage assemblies. CAUTION: Any attempt to remove an installed TPM from the system board breaks or disfigures the TPM security rivet. Upon locating a broken or disfigured rivet on an installed TPM, administrators should consider the system compromised and take appropriate measures to ensure the integrity of the system data.
  • Page 160: Retaining The Recovery Key/Password

    OS application TPM settings. For more information on firmware updates and hardware procedures, see the HP Trusted Platform Module Best Practices White Paper on the Hewlett Packard Enterprise Support Center website.
  • Page 161: Cabling

    Cabling Chassis cabling Front I/O cabling Item Description Left front I/O cable Right front I/O cable Drive backplane power cabling HPE Apollo r2600 Chassis Cabling...
  • Page 162 Item Description Power cable for Node 1 and Node 2 Power cable for drives Power cable for Node 3 and Node 4 PDB Pass-through cable HPE Apollo r2200 Chassis Item Description Power cable for Node 1 and Node 2 Power cable for drives Power cable for Node 3 and Node 4 PDB Pass-through cable HPE Apollo r2800 Chassis...
  • Page 163: Rcm 2.0 Cabling

    Item Description Power cable for Node 1 and Node 2 Power cable for drives Power cable for Node 3 and Node 4 PDB Pass-through cable RCM 2.0 cabling Fan power cabling HPE Apollo r2200 Chassis and HPE Apollo r2600 Chassis RCM 2.0 cabling...
  • Page 164 HPE Apollo r2800 Chassis Item Description PDB to left fan cage power cable Storage expander card to right fan cage power cable PDB to storage expander card fan power cable Cabling...
  • Page 165: Fan Module Cabling

    Fan module cabling Item Description Fan 1 cable Fan 2 cable Fan 3 cable Fan 4 cable Fan 5 cable Fan 6 cable Fan 7 cable Fan 8 cable HPE Smart Storage Battery cabling Fan module cabling...
  • Page 166: Node Cabling

    Node cabling SATA and Mini-SAS cabling B140i 1U node SATA cabling B140i 2U node SATA cabling Item Description Connection SATA 1 cable Mini-SAS connector 1 (SATA x4) on the system board to Port 1 on the bayonet board SATA 2 cable Mini-SAS connector 2 (SATA x4) on the system board to Port 2 on the bayonet board Node cabling...
  • Page 167 Mini-SAS H240 1U node cabling Mini-SAS H240 2U node cabling Mini-SAS P440 2U node cabling Mini-SAS P440/P840 node cabling HPE P440 Smart Array controller installed in a 1U node HPE P840 Smart Array controller installed in FlexibleLOM 2U node riser cage assembly Cabling...
  • Page 168: Fbwc Module Cabling

    Item Description Connection Mini-SAS P440/P840 cable Port 1 on P840 Smart Array controller to Port 1 on the bayonet board Mini-SAS P440/P840 cable Port 2 on P840 Smart Array controller to Port 2 on the bayonet board FBWC module cabling The FBWC solution is a separately purchased option.
  • Page 169
    HPE P440 Smart Array controller in a single-slot 1U node right PCI riser cage assembly
    HPE P440 Smart Array controller in a three-slot riser cage assembly
  • Page 170: Accelerator Cabling

    HPE P840 Smart Array controller in a FlexibleLOM 2U node riser cage assembly
    Accelerator cabling
    Accelerator cabling in the FlexibleLOM 2U node riser cage assembly:
    • NVIDIA Quadro K4200 GPU or NVIDIA Quadro M4000 GPU
    • NVIDIA Tesla K40 GPU or AMD FirePro S9150 GPU
  • Page 171: Accelerator Cabling In A Three-Slot Riser Cage Assembly

    NOTE: Depending on the accelerator model purchased, the accelerator and cabling might look slightly different from what is shown.
    Intel Xeon Phi Coprocessor 5110P
    IMPORTANT: If installing an Intel Xeon Phi Coprocessor 5110P, connect the power cable to the 2x4 connector only. Do not connect the power cable to the 2x3 connector.
  • Page 172
    • Accelerator 2 power cable (PN 825635-001)
    • Accelerator 1 power cable (PN 825634-001)
    Dual NVIDIA Tesla K40 GPUs, NVIDIA GRID K2 Reverse Air Flow GPUs, AMD FirePro S9150 GPUs, or AMD FirePro S7150 GPUs
    • Accelerator 2 power cable (PN 825635-001)
    • Accelerator 1 power cable (PN 825634-001)
    Dual Intel Xeon Phi Coprocessor 5110P
  • Page 173
    • Accelerator 2 power cable (PN 825635-001)
    • Accelerator 1 power cable (PN 825634-001)
    Single NVIDIA Tesla K80 GPU, NVIDIA Tesla M60 GPU, NVIDIA Tesla M40 GPU, NVIDIA Tesla P40 GPU, or NVIDIA Tesla P100 GPU
    • Accelerator 2 power cable (PN 825637-001)
    • Accelerator 1 power cable (PN 825636-001)
    Dual NVIDIA Tesla K80 GPUs, NVIDIA Tesla M60 GPUs, NVIDIA Tesla M40 GPUs, NVIDIA Tesla P40 GPUs, or NVIDIA Tesla P100 GPUs...
  • Page 174: 2-Pin Adapter Cables

    • Accelerator 2 power cable (PN 825637-001)
    • Accelerator 1 power cable (PN 825636-001)
    2-pin adapter cables:
    • Single NVIDIA GRID K2 Reverse Air Flow GPU
    • Dual NVIDIA GRID K2 Reverse Air Flow GPUs
  • Page 175: Software And Configuration Utilities

    ...: Online and Offline
    Erase Utility: Offline
    Scripting Toolkit for Windows and Linux: Online
    Service Pack for ProLiant: Online and Offline
    HP Smart Update Manager: Online and Offline
    HPE UEFI System Utilities: Offline
    HPE Smart Storage Administrator: Online and Offline
    FWUPDATE utility: ...
  • Page 176: Ilo Restful Api Support

    • Consolidated health and service alerts with precise time stamps • Agentless monitoring that does not affect application performance The Agentless Management Service is available in the SPP, which can be downloaded from the Hewlett Packard Enterprise website. The Active Health System log can be downloaded manually from iLO 4 or Intelligent Provisioning and sent to Hewlett Packard Enterprise.
  • Page 177: Hpe Insight Remote Support Central Connect

    HPE Insight Remote Support central connect
    When you use the embedded Remote Support functionality with ProLiant Gen8 and later server models and BladeSystem c-Class enclosures, you can register a node or chassis to communicate with Hewlett Packard Enterprise through an Insight Remote Support centralized Hosting Device in your local environment. All configuration and service event information is routed through the Hosting Device.
  • Page 178: Insight Diagnostics Survey Functionality

    The SPP is a comprehensive systems software (drivers and firmware) solution delivered as a single package with major server releases. This solution uses HP SUM as the deployment tool and is tested on all supported ProLiant servers, including ProLiant Gen8 and later servers.
  • Page 179: Service Pack For Proliant

    Smart Update: Server Firmware and Driver Updates page
    HP Smart Update Manager
    HP SUM is a product used to install and update firmware, drivers, and systems software on ProLiant servers. HP SUM provides a GUI and a scriptable command-line interface for deploying systems software to single or one-to-many ProLiant servers and network-based targets, such as iLOs, OAs, and VC Ethernet and Fibre Channel modules.
  • Page 180: Flexible Boot Control

    Action: Key
    • Access System Utilities: F9 during server POST
    • Navigate menus: Up and Down arrows
    • Select items: Enter
    • Save selections: ...
    • Access Help for a highlighted configuration option: Scan the QR code on the screen to access online help for the UEFI System Utilities and UEFI Shell.
  • Page 181: Secure Boot Configuration

    You can also configure default settings as necessary, and then save the configuration as the custom default configuration. When the system loads the default settings, it uses the custom default settings instead of the factory defaults. Secure Boot configuration Secure Boot is integrated in the UEFI specification on which the Hewlett Packard Enterprise implementation of UEFI is based.
  • Page 182: Re-Entering The Server Serial Number And Product Id

    use to perform configuration, inventory, and monitoring of a ProLiant server. The iLO RESTful API uses basic HTTPS operations (GET, PUT, POST, DELETE, and PATCH) to submit JSON-formatted data to, and return it from, the iLO web server. For more information about the iLO RESTful API and the RESTful Interface Tool, see the Hewlett Packard Enterprise website.
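    Since the API is described above in terms of HTTPS verbs and JSON payloads, a minimal Python sketch may help. The host name, credentials, resource path, and property names below are illustrative assumptions, not values taken from this guide; consult the iLO RESTful API documentation for the actual data model.

        import requests

        ILO_HOST = "https://ilo.example.com"  # hypothetical iLO address
        AUTH = ("admin", "password")          # hypothetical credentials

        # GET returns a JSON document describing the resource
        # (assumed /rest/v1 path; later firmware also exposes /redfish/v1).
        resp = requests.get(f"{ILO_HOST}/rest/v1/Systems/1", auth=AUTH, verify=False)
        resp.raise_for_status()
        system = resp.json()
        print(system.get("Model"), system.get("SerialNumber"))

        # PATCH submits JSON to change a writable property, for example a boot setting.
        requests.patch(
            f"{ILO_HOST}/rest/v1/Systems/1",
            json={"Boot": {"BootSourceOverrideTarget": "Pxe"}},
            auth=AUTH,
            verify=False,  # lab sketch only: skips TLS certificate verification
        )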
  • Page 183: Automatic Server Recovery

    Automatic Server Recovery ASR is a feature that causes the system to restart when a catastrophic operating system error occurs, such as a blue screen, ABEND, or panic. A system fail-safe timer, the ASR timer, starts when the System Management driver, also known as the Health Driver, is loaded. When the operating system is functioning properly, the system periodically resets the timer.
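    The timer-reset protocol the paragraph describes can be illustrated with a short, purely conceptual Python sketch. The timeout value and function names are assumptions for illustration; they do not reflect HPE's firmware implementation, where the timer runs in hardware.

        import threading

        ASR_TIMEOUT_S = 600  # hypothetical timeout; the real interval is set in firmware

        def restart_server():
            # On a real system this is a hardware-initiated restart, not a print.
            print("ASR timer expired: operating system hung, restarting server")

        # The fail-safe timer starts when the Health Driver loads.
        watchdog = threading.Timer(ASR_TIMEOUT_S, restart_server)
        watchdog.start()

        def health_driver_heartbeat():
            """While the OS is functioning properly, the driver periodically resets the timer."""
            global watchdog
            watchdog.cancel()
            watchdog = threading.Timer(ASR_TIMEOUT_S, restart_server)
            watchdog.start()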
  • Page 184: Safety And Security Benefits

    Access to some updates for ProLiant Servers may require product entitlement when accessed through the Hewlett Packard Enterprise Support Center support portal. Hewlett Packard Enterprise recommends that you have an HP Passport set up with relevant entitlements. For more information, see the Hewlett Packard Enterprise website.
  • Page 185: Firmware Update Application In The Uefi System Utilities

    1. Access the System ROM Flash Binary component for your node from the Hewlett Packard Enterprise Support Center website. When searching for the component, always select OS Independent to locate the binary file.
    2. Copy the binary file to a USB media or iLO virtual media.
    3. ...
  • Page 186: Software And Firmware

    If you are installing an Intelligent Provisioning-supported OS, use Intelligent Provisioning and its Configure and Install feature to install the OS and latest supported drivers. If you do not use Intelligent Provisioning to install an OS, drivers for some of the new hardware are required. These drivers, as well as other option drivers, ROM images, and value-add software can be downloaded as part of an SPP.
  • Page 187: Hpe Technology Service Portfolio

    HPE Technology Service Portfolio
    HPE Technology Services deliver confidence, reduce risk, and help customers realize agility and stability. We help customers succeed through Hybrid IT by simplifying and enriching the on-premises experience, informed by public cloud qualities and attributes. HPE Support Services enable you to choose the right service level, length of coverage, and response time to fit your business needs.
  • Page 188: Troubleshooting

    Troubleshooting Troubleshooting resources The HPE ProLiant Gen9 Troubleshooting Guide, Volume I: Troubleshooting provides procedures for resolving common problems and comprehensive courses of action for fault isolation and identification, issue resolution, and software maintenance on ProLiant servers and server blades. To view the guide, select a language: •...
  • Page 189: System Battery

    System battery
    If the node no longer automatically displays the correct date and time, replace the battery that provides power to the real-time clock. Under normal use, battery life is 5 to 10 years.
    WARNING: The computer contains an internal lithium manganese dioxide battery, a vanadium pentoxide battery, or an alkaline battery pack.
  • Page 190: Warranty And Regulatory Information

    Warranty and regulatory information
    Warranty information:
    • HPE ProLiant and x86 Servers and Options
    • HPE Enterprise Servers
    • HPE Storage Products
    • HPE Networking Products
    Regulatory information
    Belarus Kazakhstan Russia marking
    Manufacturer and Local Representative Information
    Manufacturer information: Hewlett Packard Enterprise Company, 3000 Hanover Street, Palo Alto, CA 94304 U.S.
    Local representative information Russian: •...
  • Page 191: Turkey Rohs Material Content Declaration

    Manufacturing date: The manufacturing date is defined by the serial number.
    CCSYWWZZZZ (serial number format for this product)
    Valid date formats include:
    • YWW, where Y indicates the year counting from within each new decade, with 2000 as the starting point; for example, 238: 2 for 2002 and 38 for the week of September 9.
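    Because the YWW encoding is easy to misread, here is a small Python sketch that decodes it under the rule stated above. The function name and the decade_base parameter are illustrative assumptions; the decade must be known from context, since Y only identifies a year within a decade.

        def decode_yww(yww: str, decade_base: int = 2000) -> tuple[int, int]:
            """Return (year, week) for a 3-digit YWW field, e.g. '238'."""
            year = decade_base + int(yww[0])  # '2' -> 2002 within the 2000s decade
            week = int(yww[1:])               # '38' -> week 38
            return year, week

        print(decode_yww("238"))  # (2002, 38), matching the example in the text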
  • Page 192: Electrostatic Discharge

    Electrostatic discharge
    Preventing electrostatic discharge
    About this task
    To prevent damaging the system, be aware of the precautions you must follow when setting up the system or handling parts. A discharge of static electricity from a finger or other conductor may damage system boards or other static-sensitive devices.
  • Page 193: Specifications

    Specifications
    Environmental specifications
    Temperature range:
    • Operating: 10°C to 35°C (50°F to 95°F)
    • Nonoperating: -30°C to 60°C (-22°F to 140°F)
    Relative humidity (noncondensing):
    • Operating: minimum is the higher (more moisture) of -12°C (10.4°F) dew point or 8% relative humidity; maximum is 24°C (75.2°F) dew point or 90% relative humidity
    • Nonoperating: ...
  • Page 194
    Dimensions:
    • Height: 8.73 cm (3.44 in)
    • Depth: 82.27 cm (32.40 in)
    • Width: 44.81 cm (17.64 in)
    Weight (approximate values):
    • Maximum: 23.45 kg (51.70 lb)
    • Minimum: 9.86 kg (21.74 lb)
    HPE Apollo r2800 Chassis (24 SFF with storage expander backplane)
    Dimensions...
  • Page 195: Power Supply Specifications

    • Weight (maximum): 6.47 kg (14.27 lb)
    • Weight (minimum): 4.73 kg (10.43 lb)
    Power supply specifications
    CAUTION: Do not mix power supplies with different efficiency ratings and wattages in the chassis. Install only one type of power supply. Verify that all power supplies have the same part number and label color. The system becomes unstable and may shut down when it detects mismatched power supplies.
  • Page 196
    Expansion boards installed in slot 2 of the FlexibleLOM 2U node riser cage assembly (PN 798184-B21):
    • Low-profile or single-width PCIe card (such as the P440 Smart Array controller or P840 Smart Array controller): maximum inlet ambient temperature 20°C (68°F)
    • Single-width GPU accelerator (such as the NVIDIA Quadro M4000 or the NVIDIA Quadro K4200): maximum inlet ambient temperature 21°C (69.8°F)
  • Page 197
    PCIe NIC cards, maximum inlet ambient temperature:
    InfiniBand EDR/EN 100-GB 1-port 840QSFP28 Adapter:
    • 20°C (68°F) if using an optical fiber cable in a 2U node
    • 21°C (69.8°F) if using an optical fiber cable in a 1U node
    • 22°C (71.6°F) if using a copper direct-attach cable in a 2U node
    • 23°C (73.4°F) if using a copper cable in a 1U node
    InfiniBand EDR/EN 100-GB 2-port 840QSFP28...
  • Page 198: Support And Other Resources

    Support and other resources Accessing Hewlett Packard Enterprise Support • For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website: http://www.hpe.com/assistance • To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website: http://www.hpe.com/support/hpesc Information to collect •...
  • Page 199: Remote Support

    Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider or go to the CSR website: http://www.hpe.com/support/selfrepair Remote support Remote support is available with supported devices as part of your warranty or contractual support...
  • Page 200: Regulatory Information

    • Software Depot
    • Customer Self Repair
    • Insight Remote Support
    • Serviceguard Solutions for HP-UX
    • Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix
    • Storage white papers and analyst reports
    Documentation feedback
    Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback ([email protected]).
  • Page 201: Acronyms And Abbreviations

    • DPC: DIMMs per channel
    • EAC: EuroAsian Economic Commission
    • FBWC: flash-backed write cache
    • GPU: graphics processing unit
    • HP SUM: HP Smart Update Manager
    • HPE APM: HPE Advanced Power Manager
    • HPE SIM: HPE Systems Insight Manager
    • HPE SSA: HPE Smart Storage Administrator
  • Page 202
    • IEC: International Electrotechnical Commission
    • iLO: Integrated Lights-Out
    • IML: Integrated Management Log
    • ISO: International Organization for Standardization
    • LFF: large form factor
    • LOM: LAN on Motherboard
    • LRDIMM: load reduced dual in-line memory module
    • NIC: network interface controller
    • NMI: nonmaskable interrupt
    • NVRAM: nonvolatile memory
    • OA: Onboard Administrator
    • PCIe: Peripheral Component Interconnect Express
    • PDB: power distribution board
    • PDU: power distribution unit
    • POST...
  • Page 203
    • RDP: Remote Desktop Protocol
    • RoHS: Restriction of Hazardous Substances
    • RPS: redundant power supply
    • SAS: serial attached SCSI
    • SATA: serial ATA
    • SFF: small form factor
    • SIM: Systems Insight Manager
    • SPP: Service Pack for ProLiant
    • SUV: serial, USB, video
    • TMRA: recommended ambient operating temperature
    • TPM: Trusted Platform Module
    • UEFI: Unified Extensible Firmware Interface
    • UID: unit identification
    • USB: universal serial bus
    • VCA: Version Control Agent...
