HP XC System Software
Hardware Preparation Guide
Version 3.2.1
HP Part Number: A-XCHWP-321c
Published: October 2008
Table of Contents
About This Document.......................................................................................................11
1 Hardware and Network Overview............................................................................19
2 Cabling Server Blades.................................................................................................33
3 Making Node and Switch Connections....................................................................45
4 Preparing Individual Nodes........................................................................................59
5 Troubleshooting..........................................................................................................139
A Establishing a Connection Through a Serial Port...................................................141
B Server Blade Configuration Examples.....................................................................143
Glossary.........................................................................................................................147
Index...............................................................................................................................153
About This Document
This document describes how to prepare the nodes in your HP cluster platform before installing
HP XC System Software.
An HP XC system is integrated with several open source software components. Some open source
software components are being used for underlying technology, and their deployment is
transparent. Some open source software components require user-level documentation specific
to HP XC systems, and that kind of information is included in this document when required.
HP relies on the documentation provided by the open source developers to supply the information
you need to use their product. For links to open source software documentation for products
used in HP XC systems, see "Supplementary Software Products" in "Related Information".
Documentation for third-party hardware and software components that are supported on the
HP XC system is supplied by the third-party vendor. However, information about the operation
of third-party software is included in this document if the functionality of the third-party
component differs from standard behavior when used in the XC environment. In this case, HP
XC documentation supersedes information supplied by the third-party vendor. For links to
third-party component documentation, see "Related Information".
Standard Linux® administrative tasks or the functions provided by standard Linux tools and
commands are documented in commercially available Linux reference manuals and on various
Web sites. For more information about obtaining documentation for standard Linux administrative
tasks and associated topics, see the list of Web sites and additional publications provided in
"Related Information".
Intended Audience
The information in this document is written for technicians or administrators who have the task
of preparing the hardware on which the HP XC System Software will be installed.
Before beginning, you must meet the following requirements:
• You are familiar with accessing BIOS and consoles with either Ethernet or serial port
  connections and terminal emulators.
• You have access to and have read the HP Cluster Platform documentation.
• You have access to and have read the HP server blade documentation if the hardware
  configuration contains HP server blade models.
• You have previous experience with a Linux operating system.
New and Changed Information in This Edition
This document was updated to include the following servers:
• CP3000BL platform:
  — HP ProLiant BL2x220c G5 server blade
Typographic Conventions
This document uses the following typographical conventions:
%, $, or #
    A percent sign represents the C shell system prompt. A dollar sign represents the system
    prompt for the Korn, POSIX, and Bourne shells. A number sign represents the superuser
    prompt.
audit(5)
    A manpage. The manpage name is audit, and it is located in Section 5.
Command
    A command name or qualified command phrase.
Computer output
    Text displayed by the computer.
Ctrl+x
    A key sequence. A sequence such as Ctrl+x indicates that you must hold down the key
    labeled Ctrl while you press another key or mouse button.
ENVIRONMENT VARIABLE
    The name of an environment variable, for example, PATH.
[ERROR NAME]
    The name of an error, usually returned in the errno variable.
Key
    The name of a keyboard key. Return and Enter both refer to the same key.
Term
    The defined use of an important word or phrase.
User input
    Commands and other text that you type.
Variable
    The name of a placeholder in a command, function, or other syntax display that you
    replace with an actual value.
[ ]
    The contents are optional in syntax. If the contents are a list separated by |, you can
    choose one of the items.
{ }
    The contents are required in syntax. If the contents are a list separated by |, you must
    choose one of the items.
. . .
    The preceding element can be repeated an arbitrary number of times.
|
    Separates items in a list of choices.
WARNING
    A warning calls attention to important information that if not understood or followed
    will result in personal injury or nonrecoverable system problems.
CAUTION
    A caution calls attention to important information that if not understood or followed
    will result in data loss, data corruption, or damage to hardware or software.
IMPORTANT
    This alert provides essential information to explain a concept or to complete a task.
NOTE
    A note contains additional information to emphasize or supplement important points of
    the main text.
HP XC and Related HP Products Information
The HP XC System Software Documentation Set, the Master Firmware List, and HP XC HowTo
documents are available at this HP Technical Documentation Web site:
The HP XC System Software Documentation Set includes the following core documents:
HP XC System Software Release Notes
    Describes important, last-minute information about firmware, software, or hardware that
    might affect the system. This document is not shipped on the HP XC documentation CD. It
    is available only on line.
HP XC Hardware Preparation Guide
    Describes hardware preparation tasks specific to HP XC that are required to prepare each
    supported hardware model for installation and configuration, including required node and
    switch connections.
HP XC System Software Installation Guide
    Provides step-by-step instructions for installing the HP XC System Software on the head
    node and configuring the system.
HP XC System Software Administration Guide
    Provides an overview of the HP XC system administrative environment, cluster
    administration tasks, node maintenance tasks, LSF® administration tasks, and
    troubleshooting procedures.
HP XC System Software User's Guide
    Provides an overview of managing the HP XC user environment with modules, managing jobs
    with LSF, and describes how to build, run, debug, and troubleshoot serial and parallel
    applications on an HP XC system.
QuickSpecs for HP XC System Software
    Provides a product overview, hardware requirements, software requirements, software
    licensing information, ordering information, and information about commercially available
    software that has been qualified to interoperate with the HP XC System Software. The
    QuickSpecs are located on line:
See the following sources for information about related HP products.
HP XC Program Development Environment
The Program Development Environment home page provides pointers to tools that have been
tested in the HP XC program development environment (for example, TotalView® and other
debuggers, compilers, and so on).
HP Message Passing Interface
HP Message Passing Interface (HP-MPI) is an implementation of the MPI standard that has been
integrated in HP XC systems. The home page and documentation is located at the following Web
site:
HP Serviceguard
HP Serviceguard is a service availability tool supported on an HP XC system. HP Serviceguard
enables some system services to continue if a hardware or software failure occurs. The HP
Serviceguard documentation is available at the following Web site:
HP Scalable Visualization Array
The HP Scalable Visualization Array (SVA) is a scalable visualization solution that is integrated
with the HP XC System Software. The SVA documentation is available at the following Web site:
HP Cluster Platform
The cluster platform documentation describes site requirements, shows you how to set up the
servers and additional devices, and provides procedures to operate and manage the hardware.
These documents are available at the following Web site:
HP Integrity and HP ProLiant Servers
Documentation for HP Integrity and HP ProLiant servers is available at the following web
address:
For c-Class Server BladeSystems, see also the installation, administration, and user guides for
the following components:
• HP (ProLiant or Integrity) C-Class Server Blades
• HP BladeSystem c-Class Onboard Administrator
• HP Server Blade c7000 Enclosure
• HP BladeSystem c3000 Enclosure
Related Information
This section provides useful links to third-party, open source, and other related software products.
Supplementary Software Products
This section provides links to third-party and open source software products that are
integrated into the HP XC System Software core technology. In the HP XC documentation, except
where necessary, references to third-party and open source software components are generic,
and the HP XC adjective is not added to any reference to a third-party or open source command
or product name. For example, the SLURM srun command is simply referred to as the srun
command.
The location of each web address or link to a particular topic listed in this section is subject to
change without notice by the site provider.
• Home page for Platform Computing Corporation, the developer of the Load Sharing Facility
(LSF). LSF-HPC with SLURM, the batch system resource manager used on an HP XC system,
is tightly integrated with the HP XC and SLURM software. Documentation specific to
LSF-HPC with SLURM is provided in the HP XC documentation set.
Standard LSF is also available as an alternative resource management system (instead of
LSF-HPC with SLURM) for HP XC. This is the version of LSF that is widely discussed on
the Platform web address.
For your convenience, the following Platform Computing Corporation LSF documents are
shipped on the HP XC documentation CD in PDF format:
— Administering Platform LSF
— Administration Primer
— Platform LSF Reference
— Quick Reference Card
— Running Jobs with Platform LSF
LSF procedures and information supplied in the HP XC documentation, particularly the
documentation relating to the LSF-HPC integration with SLURM, supersede the information
supplied in the LSF manuals from Platform Computing Corporation.
The Platform Computing Corporation LSF manpages are installed by default. The lsf_diff(7)
manpage supplied by HP describes LSF command differences when using LSF-HPC with SLURM on
an HP XC system.
The following documents in the HP XC System Software Documentation Set provide
information about administering and using LSF on an HP XC system:
— HP XC System Software Administration Guide
— HP XC System Software User's Guide
• Documentation for the Simple Linux Utility for Resource Management (SLURM), which is
  integrated with LSF to manage job and compute resources on an HP XC system.
• Home page for Nagios®, a system and network monitoring application that is integrated
  into an HP XC system to provide monitoring capabilities. Nagios watches specified hosts
  and services and issues alerts when problems occur and when problems are resolved.
• Home page of RRDtool, a round-robin database tool and graphing system. In the HP XC
system, RRDtool is used with Nagios to provide a graphical view of system status.
• Home page for Supermon, a high-speed cluster monitoring system that emphasizes low
perturbation, high sampling rates, and an extensible data protocol and programming
interface. Supermon works in conjunction with Nagios to provide HP XC system monitoring.
• Home page for the parallel distributed shell (pdsh), which executes commands across HP
  XC client nodes in parallel.
• Home page for syslog-ng, a logging tool that replaces the traditional syslog functionality.
  The syslog-ng tool is a flexible and scalable audit trail processing tool. It provides a
  centralized, securely stored log of all devices on the network.
• Home page for SystemImager®, which is the underlying technology that distributes the
golden image to all nodes and distributes configuration changes throughout the system.
• Home page for the Linux Virtual Server (LVS), the load balancer running on the Linux
  operating system that distributes login requests on the HP XC system.
• Home page for Macrovision®, developer of the FLEXlm™ license management utility, which
  is used for HP XC license management.
• Web address for Modules, which provide for easy dynamic modification of a user's
  environment through modulefiles, which typically instruct the module command to alter
  or set shell environment variables.
• Home page for MySQL AB, developer of the MySQL database. This web address contains
  a link to the MySQL documentation, particularly the MySQL Reference Manual.
Related Software Products and Additional Publications
This section provides pointers to web addresses for related software products and provides
references to useful third-party publications.
The location of each web address or link to a particular topic is subject to change without notice
by the site provider.
Linux Web Addresses
• Home page for Red Hat®, distributors of Red Hat Enterprise Linux Advanced Server, a
Linux distribution with which the HP XC operating environment is compatible.
• This web address for the Linux Documentation Project (LDP) contains guides that describe
aspects of working with Linux, from creating your own Linux system from scratch to bash
script writing. This site also includes links to Linux HowTo documents, frequently asked
questions (FAQs), and manpages.
• Web address providing documents and tutorials for the Linux user. Documents contain
  instructions for installing and using applications for Linux, configuring hardware, and a
  variety of other topics.
• Home page for the GNU Project. This site provides online software and information for
  many programs and utilities that are commonly used on GNU/Linux systems. Online
  information includes guides for using the bash shell, emacs, make, cc, gdb, and more.
MPI Web Addresses
• Contains the official MPI standards documents, errata, and archives of the MPI Forum. The
MPI Forum is an open group with representatives from many organizations that define and
maintain the MPI standard.
• A comprehensive site containing general information, such as the specification and FAQs,
and pointers to other resources, including tutorials, implementations, and other MPI-related
sites.
Compiler Web Addresses
• Web address for Intel® compilers.
• Web address for general Intel software development information.
• Home page for The Portland Group™, supplier of the PGI® compiler.
Debugger Web Address
Home page for Etnus, Inc., maker of the TotalView® parallel debugger.
Software RAID Web Addresses
• A document (in two formats: HTML and PDF) that describes how to use software RAID
  under a Linux operating system.
• Provides information about how to use the mdadm RAID management utility.
Additional Publications
For more information about standard Linux system administration or other related software
topics, consider using one of the following publications, which must be purchased separately:
• Linux Administration Unleashed, by Thomas Schenk, et al.
• Linux Administration Handbook, by Evi Nemeth, Garth Snyder, Trent R. Hein, et al.
• Managing NFS and NIS, by Hal Stern, Mike Eisler, and Ricardo Labiaga (O'Reilly)
• MySQL, by Paul DuBois
• MySQL Cookbook, by Paul DuBois
• High Performance MySQL, by Jeremy Zawodny and Derek J. Balling (O'Reilly)
• Perl Cookbook, Second Edition, by Tom Christiansen and Nathan Torkington
• Perl in a Nutshell: A Desktop Quick Reference, by Ellen Siever, et al.
Manpages
Manpages provide online reference and command information from the command line. Manpages
are supplied with the HP XC system for standard HP XC components, Linux user commands,
LSF commands, and other software components that are distributed with the HP XC system.
Manpages for third-party software components might be provided as a part of the deliverables
for that component.
Using discover(8) as an example, you can use either one of the following commands to display a
manpage:
$ man discover
$ man 8 discover
If you are not sure about a command you need to use, enter the man command with the -k option
to obtain a list of commands that are related to a keyword. For example:
$ man -k keyword
HP Encourages Your Comments
HP encourages comments concerning this document. We are committed to providing
documentation that meets your needs. Send any errors found, suggestions for improvement, or
compliments to:
Include the document title, manufacturing part number, and any comment, error found, or
suggestion for improvement you have concerning this document.
1 Hardware and Network Overview
This chapter addresses the following topics:
• Supported Cluster Platforms
• Server Blade Enclosure Components
• Server Blade Mezzanine Cards
• Server Blade Interconnect Modules
• Supported Console Management Devices
• Administration Network Overview
• Administration Network: Console Branch
• Interconnect Network
• Large-Scale Systems
1.1 Supported Cluster Platforms
An HP XC system is made up of interconnected servers.
A typical HP XC hardware configuration (on systems other than Server Blade c-Class servers)
contains from 5 to 512 nodes. To allow systems of a greater size, an HP XC system can be arranged
into a large-scale configuration with up to 1,024 compute nodes (HP might consider larger systems
as special cases).
HP Server Blade c-Class servers (hereafter called server blades) are well suited to form HP
XC systems. Their physical characteristics make it possible to have many tightly interconnected
nodes while reducing cabling requirements. Typically, server blades are used as
compute nodes but they can also function as the head node and service nodes. The hardware
and network configuration on an HP XC system with HP server blades differs from that of a
traditional HP XC system, and those differences are described in this document.
You can install and configure HP XC System Software on the following platforms:
• HP Cluster Platform 3000 (CP3000)
• HP Cluster Platform 3000BL (CP3000BL) with HP c-Class server blades
• HP Cluster Platform 4000 (CP4000)
• HP Cluster Platform 4000BL (CP4000BL) with HP c-Class server blades
• HP Cluster Platform 6000 (CP6000)
• HP Cluster Platform 6000BL (CP6000BL) with HP c-Class server blades
For more information about the cluster platforms, see the documentation that was shipped with
the hardware.
1.1.1 Supported Processor Architectures and Hardware Models
Table 1-1 lists the hardware models that are supported for each HP cluster platform.
IMPORTANT: A hardware configuration can contain a mixture of Opteron and Xeon nodes,
but not Itanium nodes.
Table 1-1 Supported Processor Architectures and Hardware Models

Blade servers:
CP3000BL (Intel® Xeon™ with EM64T):
• HP ProLiant BL2x220c G5
• HP ProLiant BL260c G5
• HP ProLiant BL460c
• HP ProLiant BL480c
• HP ProLiant BL680c G5
CP4000BL (AMD Opteron®):
• HP ProLiant BL465c
• HP ProLiant BL465c G5
• HP ProLiant BL685c
• HP ProLiant BL685c G5
CP6000BL (Intel Itanium®):
• HP Integrity BL860c

Non-blade servers:
CP3000 (Intel Xeon with EM64T):
• HP ProLiant DL140 G2
• HP ProLiant DL140 G3
• HP ProLiant DL160 G5
• HP ProLiant DL360 G4
• HP ProLiant DL360 G4p
• HP ProLiant DL360 G5
• HP ProLiant DL380 G4
• HP ProLiant DL380 G5
• HP ProLiant DL580 G4
• HP ProLiant DL580 G5
• HP xw8200 Workstation
• HP xw8400 Workstation
• HP xw8600 Workstation
CP4000 (AMD Opteron):
• HP ProLiant DL145
• HP ProLiant DL145 G2
• HP ProLiant DL145 G3
• HP ProLiant DL165 G5
• HP ProLiant DL365
• HP ProLiant DL365 G5
• HP ProLiant DL385
• HP ProLiant DL385 G2
• HP ProLiant DL385 G5
• HP ProLiant DL585
• HP ProLiant DL585 G2
• HP ProLiant DL585 G5
• HP ProLiant DL785 G5
• HP xw9300 Workstation
• HP xw9400 Workstation
CP6000 (Intel Itanium):
• HP Integrity rx1620
• HP Integrity rx2600
• HP Integrity rx2620
• HP Integrity rx2660
• HP Integrity rx4640
• HP Integrity rx8620
HP server blades offer an entirely modular computing system with separate computing and
physical I/O modules that are connected and shared through a common chassis, called an
enclosure; for more information on enclosures, see “Server Blade Enclosure Components”
(page 22). Full-height Opteron server blades can take up to four dual-core CPUs, and Xeon
server blades can take up to two quad-core CPUs.
Table 1-2 lists the HP ProLiant hardware models supported for use in an HP XC hardware
configuration.
Table 1-2 Supported HP ProLiant Server Blade Models

BL2x220c G5: half height; Intel Xeon; up to two quad-core or up to two dual-core per server
node; built-in NICs: 2 (1 per server node); mezzanine slots: 2 (1 per server node); hot-plug
drives: 0

BL260c G5: half height; Intel Xeon; up to two quad-core or up to two dual-core; built-in
NICs: 1; mezzanine slots: 1; hot-plug drives: 0

BL460c: half height; Intel Xeon; up to two quad-core or up to two dual-core; built-in NICs: 2;
mezzanine slots: 2; hot-plug drives: 2

BL465c: half height; AMD Opteron; up to two single-core or up to two dual-core; built-in
NICs: 2; mezzanine slots: 2; hot-plug drives: 2

BL465c G5: half height; AMD Opteron; up to two single-core or up to two dual-core; built-in
NICs: 2; mezzanine slots: 2; hot-plug drives: 2

BL480c: full height; Intel Xeon; up to two quad-core or up to two dual-core; built-in NICs: 4;
mezzanine slots: 3; hot-plug drives: 4

BL680c G5: full height; Intel Xeon; two quad-core or four quad-core; built-in NICs: 4;
mezzanine slots: 3; hot-plug drives: 2

BL685c: full height; AMD Opteron; up to four dual-core; built-in NICs: 4; mezzanine slots: 3;
hot-plug drives: 2

BL685c G5: full height; AMD Opteron; up to four dual-core; built-in NICs: 4; mezzanine
slots: 3; hot-plug drives: 2

BL860c: full height; Intel Itanium®; up to two quad-core or dual-core; built-in NICs: 4;
mezzanine slots: 3; hot-plug drives: 2
For more information on an individual server blade, see the QuickSpec for your model. The
QuickSpecs are located at the following Web address:
1.1.2 Supported Server Blade Combinations
The HP XC System Software supports the following server blade hardware configurations:
• A hardware configuration composed entirely of HP server blades, that is, the head node,
  the service nodes, and all compute nodes are server blades.
• A hardware configuration containing a mixture of Opteron and Xeon server blades, but
  not Itanium server blades.
• A mixed hardware configuration of HP server blades and non-blade servers where:
  — The head node can be either a server blade or a non-blade server
  — Service nodes can be either server blades or non-blade servers
  — All compute nodes are server blades
1.2 Server Blade Enclosure Components
HP server blades are contained in an enclosure, which is a chassis that houses and connects blade
hardware components. An enclosure is managed by an Onboard Administrator. The HP
BladeSystem c7000 and c3000 enclosures are supported under HP XC.
This section discusses the following topics:
• HP BladeSystem c7000 Enclosure
• HP BladeSystem c3000 Enclosure
• HP BladeSystem c-Class Onboard Administrator
• Insight Display
For more information about enclosures and their related components, see the HP Server Blade
c7000 Enclosure Setup and Installation Guide.
1.2.1 HP BladeSystem c7000 Enclosure
Figure 1-1 shows front and rear views of the HP BladeSystem c7000 enclosure.
Figure 1-1 HP BladeSystem c7000 Enclosure (Front and Rear Views)
Figure 1-2 is an illustration showing the location of the device bays, power supply bays, and the
Insight Display at the front of the HP BladeSystem c7000 enclosure.
Figure 1-2 HP BladeSystem c7000 Enclosure Bay Locations (Front View)
1 Device bays
2 Power supply bays
3 Insight Display

The HP BladeSystem c7000 enclosure can house a maximum of 16 half-height or 8 full-height
server blades. The c7000 enclosure can contain a maximum of 6 power supplies. Figure 1-3
shows the numbering of the bays into which server blades are inserted. The numbering scheme
differs for half-height and full-height server blades.
Figure 1-3 HP BladeSystem c7000 Enclosure Bay Numbering for Half-Height and Full-Height Server Blades
The number of fans in the enclosure influences the placement of the server blades. Use the
following table to determine where to insert server blades in the enclosure, based on the
number of fans.
Number of Fans    Insert Half-Height Server Blades in These Bays    Insert Full-Height Server Blades in These Bays
4 fans            1, 2, 9, 10 (see note)                            1 or 2
6 fans            1, 2, 3, 4, 9, 10, 11, 12                         1, 2, 3, 4
8 or 10 fans      all server bays                                   all server bays

Note: With 4 fans, only two servers are supported. They can be inserted in any two of these bays.
Figure 1-4 is an illustration showing the location of the fan bays, interconnect bays, onboard
administrator bays, the power supply exhaust vent, and the AC power connections at the rear
of the HP BladeSystem c7000 enclosure. This figure includes an inset showing the serial connector,
onboard administrator/iLO port, and the enclosure Uplink and Downlink ports.
Figure 1-4 HP BladeSystem c7000 Enclosure Bay Locations (Rear View)
1 Fan bays
2 Interconnect Bay #1
3 Interconnect Bay #2
4 Interconnect Bay #3
5 Interconnect Bay #4
6 Interconnect Bay #5
7 Interconnect Bay #6
8 Interconnect Bay #7
9 Interconnect Bay #8
10 Onboard Administrator Bay 1
11 Onboard Administrator Bay 2
12 Power Supply Exhaust Vent
13 AC Power Connections

Inset:
1 Onboard Administrator/Integrated Lights Out port
2 Serial connector
3 Enclosure Downlink port
4 Enclosure Uplink port
General Configuration Guidelines
The following are general guidelines for configuring HP BladeSystem c7000 enclosures:
• Up to four enclosures can be mounted in an HP 42U Infrastructure Rack.
• If an enclosure is not fully populated with fans and power supplies, see the positioning
  guidelines in the HP BladeSystem c7000 enclosure documentation.
• Enclosures are cabled together using their uplink and downlink ports.
• The top uplink port in each rack is used as a service port to attach a laptop or other device
  for initial configuration or subsequent debugging.
Specific HP XC Setup Guidelines
The following enclosure setup guidelines are specific to HP XC:
• On every HP BladeSystem c7000 enclosure, an Ethernet interconnect module (either a switch
  or a pass-thru module) must be installed in interconnect bay 1 for the administration network.
• Hardware configurations that use Gigabit Ethernet as the interconnect require an additional
  Ethernet interconnect module (either a switch or pass-through module) to be installed in
  interconnect bay 2.
• Systems that use InfiniBand as the interconnect require a double-wide InfiniBand interconnect
  module installed in interconnect bays 5 and 6.
• Some systems might need an additional Ethernet interconnect module to support server
  blades that require external connections. For more information about external connections,
  see "Cabling for the External Network" (Section 2.6).
1.2.2 HP BladeSystem c3000 Enclosure
Figure 1-5 shows the front and rear views of the HP BladeSystem c3000 enclosure.
Figure 1-5 HP BladeSystem c3000 Enclosure (Front and Rear Views)
Figure 1-6 HP BladeSystem c3000 Enclosure Tower Model
Figure 1-7 is an illustration showing the location of the device bays, optional DVD drive, Insight
display, and Onboard Administrator at the front of the HP BladeSystem c3000 Enclosure.
Figure 1-7 HP BladeSystem c3000 Enclosure Bay Locations (Front View)
1 Device bays
2 DVD drive (optional)
3 Onboard Administrator (OA)
4 Insight Display

The HP BladeSystem c3000 enclosure can house a maximum of 8 half-height or 4 full-height
server blades. Additionally, the c3000 enclosure contains an integrated DVD drive, which is
available to the server blades in the enclosure. Figure 1-8 shows the numbering of the server
bays of the HP BladeSystem c3000 enclosure for both half-height and full-height server blades.
Figure 1-8 HP BladeSystem c3000 Enclosure Bay Numbering
The number of fans in the enclosure influences the placement of the server blades. Use the
following table to determine where to insert server blades in the enclosure, based on the
number of fans.
Number of Fans         Insert Half-Height Server Blades in These Bays    Insert Full-Height Server Blades in These Bays
4 fan configuration    1, 2, 5, 6                                        1, 2
6 fan configuration    any                                               any
Figure 1-9 is an illustration showing the location of the interconnect bays, fan bays, onboard
administrator bays, the enclosure/onboard administrator link module, and power supplies at
the rear of the HP BladeSystem c3000 enclosure. This figure includes an inset showing the onboard
administrator/iLO port and the enclosure uplink and downlink ports.
Figure 1-9 HP BladeSystem c3000 Enclosure Bay Locations (Rear View)
1 Interconnect bay #1
2 Fans
3 Interconnect bay #2
4 Enclosure/Onboard Administrator Link Module
5 Power Supplies
6 Interconnect bay #3
7 Interconnect bay #4

Inset:
1 Enclosure Downlink port
2 Enclosure Uplink port
3 Onboard Administrator/Integrated Lights Out port
Specific HP XC Setup Guidelines
The following enclosure setup guidelines are specific to HP XC:
• On every enclosure, an Ethernet interconnect module (either a switch or pass-through
  module) must be installed in interconnect bay 1 for the administration network.
• Hardware configurations that use Gigabit Ethernet as the interconnect can share the
  administration network switch in interconnect bay #1.
• Systems that use InfiniBand as the interconnect require a double-wide InfiniBand interconnect
  module.
• Some systems might need an additional Ethernet interconnect module to support server
  blades that require external connections. For more information about external connections,
  see "Cabling for the External Network" (Section 2.6).
1.2.3 HP BladeSystem c-Class Onboard Administrator
The Onboard Administrator is the management device for an enclosure, and at least one Onboard
Administrator is installed in every enclosure.
You can access the Onboard Administrator through a graphical Web-based user interface, a
command-line interface, or the simple object access protocol (SOAP) to configure and monitor
the enclosure.
You can add a second Onboard Administrator to provide redundancy.
The Onboard Administrator requires a password. For information on setting the Onboard
Administrator password, see the HP BladeSystem Onboard Administrator documentation.
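Once the Onboard Administrator has an IP address, you can reach its command-line interface
over the network. The following is a minimal sketch, assuming a hypothetical Onboard
Administrator address of 172.20.0.10 and the Administrator account; see the HP BladeSystem
Onboard Administrator documentation for the supported command set:

$ ssh Administrator@172.20.0.10
OA> show enclosure status

The show enclosure status command summarizes the health of the enclosure components, which
is a quick way to verify the connection and the enclosure state before you begin node
preparation.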
1.2.4 Insight Display
The Insight Display is a small LCD panel on the front of an enclosure that provides instant access
to important information about the enclosure such as the IP address and color-coded status.
Figure 1-10 Server Blade Insight Display
You can use the Insight Display panel to make some basic enclosure settings.
1.3 Server Blade Mezzanine Cards
The mezzanine slots on each server blade provide additional I/O capability.
Mezzanine cards are PCI-Express cards that attach inside the server blade through a special
connector and have no physical I/O ports on them.
Card types include Ethernet, Fibre Channel, and 10 Gigabit Ethernet.
1.4 Server Blade Interconnect Modules
An interconnect module provides the physical I/O for the built-in NICs or the supplemental
mezzanine cards on the server blades. An interconnect module can be either a switch or a pass-thru
module.
A switch provides local switching and minimizes cabling. Switch models that are supported as
interconnect modules include, but are not limited to:
•
•
•
•
Nortel GbE2c Gigabit Ethernet switch
Cisco Catalyst Gigabit Ethernet switch
HP 4x DDR InfiniBand switch
Brocade SAN switch
A pass-thru module provides direct connections to the individual ports on each node and does
not provide any local switching.
Bays in the back of each enclosure correspond to specific interfaces on the server blades. Thus,
all I/O devices that correspond to a specific interconnect bay must be the same type.
Interconnect Bay Port Mapping
Connections between the server blades and the interconnect bays are hard wired. Each of the 8
interconnect bays in the back of the enclosure has a connection to each of the 16 server bays in
the front of the enclosure. Which built-in NIC or mezzanine card an interconnect module
connects to depends on the interconnect bay into which the module is plugged. Because
full-height blades consume two server bays, they have twice as many connections to each of
the interconnect bays.
See the HP BladeSystem Onboard Administrator User Guide for illustrations of interconnect bay
port mapping connections on half- and full-height server blades.
1.5 Supported Console Management Devices
Table 1-3 lists the supported console management device for each hardware model within each
cluster platform. The console management device provides remote access to the console of each
node, enabling functions such as remote power management, remote console logging, and remote
boot.
HP workstation models do not have console ports.
HP ProLiant servers provide remote management features through a baseboard management
controller (BMC). The BMC enables functions such as remote power management and remote
boot. HP ProLiant BMCs comply with a specified release of the industry-standard Intelligent
Platform Management Interface (IPMI). HP XC supports two IPMI-compliant BMCs: integrated
lights out (iLO and iLO2) and Lights-Out 100i (LO-100i), depending on the server model.
Each HP ProLiant server blade has a built-in Integrated Lights Out (iLO2) device that provides
full remote power control and serial console access. You can access the iLO2 device through the
Onboard Administrator. On server blades, iLO2 advanced features are enabled by default and
include the following:
•
Full remote graphics console access including full keyboard, video, mouse (KVM) access
through a Web browser
•
Support for remote virtual media which enables you to mount a local CD or diskette and
serve it to the server blade over the network
Each HP Integrity server blade has a built-in management processor (MP) device that provides
full remote power control and serial console access. You can access the MP device by connecting
a serial terminal or laptop serial port to the local IO cable that is connected to the server blade.
Hardware models that use iLO and iLO2 need certain settings that cannot be made until the iLO
has an IP address. The HP XC System Software Installation Guide provides instructions for using
a browser to connect to the iLO and iLO2 to enable telnet access.
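As an illustration, after telnet access is enabled, you can log in to an iLO2 directly and
check the power state of a node. The IP address shown here is hypothetical, and the prompt
and command set vary with the iLO firmware version; see the iLO2 documentation for specifics:

$ telnet 172.21.0.5
Login Name: Administrator
Password:
</>hpiLO-> power

The power command reports whether the server is currently powered on or off.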
Table 1-3 Supported Console Management Devices

CP3000
Hardware Component           Firmware Dependency
HP ProLiant DL140 G2         Lights-Out 100i management (LO-100i), system BIOS
HP ProLiant DL140 G3         LO-100i, system BIOS
HP ProLiant DL160 G5         LO-100i, system BIOS
HP ProLiant DL360 G4         Integrated Lights Out (iLO), system BIOS
HP ProLiant DL360 G4p        iLO, system BIOS
HP ProLiant DL360 G5         iLO2, system BIOS
HP ProLiant DL380 G4         iLO, system BIOS
HP ProLiant DL380 G5         iLO2, system BIOS
HP ProLiant DL580 G4         iLO2, system BIOS
HP ProLiant DL580 G5         iLO2, system BIOS

CP3000BL
HP ProLiant BL2x220c G5      iLO2, system BIOS, Onboard Administrator (OA)
HP ProLiant BL260c G5        iLO2, system BIOS, OA
HP ProLiant BL460c           iLO2, system BIOS, OA
HP ProLiant BL480c           iLO2, system BIOS, OA
HP ProLiant BL680c G5        iLO2, system BIOS, OA

CP4000
HP ProLiant DL145            LO-100i
HP ProLiant DL145 G2         LO-100i
HP ProLiant DL145 G3         LO-100i
HP ProLiant DL165 G5         LO-100i
HP ProLiant DL365            iLO2
HP ProLiant DL365 G5         iLO2
HP ProLiant DL385            iLO2
HP ProLiant DL385 G2         iLO2
HP ProLiant DL385 G5         iLO2
HP ProLiant DL585            iLO2
HP ProLiant DL585 G2         iLO2
HP ProLiant DL585 G5         iLO2
HP ProLiant DL785 G5         iLO2

CP4000BL
HP ProLiant BL465c           iLO2
HP ProLiant BL465c G5        iLO2
HP ProLiant BL685c           iLO2
HP ProLiant BL685c G5        iLO2

CP6000
HP Integrity rx1620          Management Processor (MP)
HP Integrity rx2600          MP
HP Integrity rx2620          MP
HP Integrity rx2660          MP
HP Integrity rx4640          MP
HP Integrity rx8620          MP

CP6000BL
HP Integrity BL860c Server Blade (Full-height)    MP
1.6 Administration Network Overview
The administration network is a private network within the HP XC system that is used primarily
for administrative operations. This network is treated as a flat network during run time (that
is, the communication time between any two points in the network equals the communication
time between any other two points in the network). However, during the installation and
configuration of the
HP XC system, the administrative tools probe and discover the topology of the administration
network. The administration network requires and uses Gigabit Ethernet.
The administration network has at least one Root Administration Switch and can have multiple
Branch Administration Switches.
1.7 Administration Network: Console Branch
The console branch is part of the private administration network within an HP XC system that
is used primarily for managing and monitoring the consoles of the nodes that comprise the HP
XC system. This branch of the network uses 10/100 Mbps Ethernet.
During the installation and configuration of the HP XC system, the administrative tools probe
and discover the topology of the entire administration network including the console branch.
A (nonblade) HP XC system has at least one Root Console Switch with the potential for multiple
Branch Console Switches, as shown in Figure 1-11.
Figure 1-11 Administration Network: Console Branch (Without HP Server Blades)
(The figure shows the head node and specialized role nodes connecting to the Root
Administration Switch, with the Root Console Switch and Branch Console Switches providing the
console branch connections to the compute nodes.)
1.8 Interconnect Network
The interconnect network is a private network within the HP XC system. Typically, every node
in the HP XC system is connected to the interconnect.
The interconnect network is dedicated to communication between processors and access to data
in storage areas. It provides a high-speed communications path used primarily for user file
service and for communications within applications that are distributed among nodes of the
cluster.
Table 1-4 lists the supported interconnect types on each cluster platform. The interconnect types
are displayed in the context of an interconnect family, in which InfiniBand products constitute
one family, Quadrics® QsNetII® constitutes another interconnect family, and so on. For more
information about the interconnect types on individual hardware models, see the cluster platform
documentation.
Table 1-4 Supported Interconnects

Cluster Platform    Supported Interconnect Families
CP3000              Gigabit Ethernet; InfiniBand PCI Express Single Data Rate and Double Data
                    Rate (DDR); InfiniBand ConnectX DDR (1); InfiniBand PCI-X; Myrinet (Rev. D,
                    E, and F); QsNetII
CP3000BL            Gigabit Ethernet; InfiniBand PCI Express Single Data Rate and Double Data
                    Rate (DDR); InfiniBand ConnectX DDR (1)
CP4000              Gigabit Ethernet; InfiniBand PCI Express Single Data Rate and Double Data
                    Rate (DDR) (2); InfiniBand ConnectX DDR (1); InfiniBand PCI-X; Myrinet
                    (Rev. D, E, and F); QsNetII
CP4000BL            Gigabit Ethernet; InfiniBand PCI Express Single Data Rate and Double Data
                    Rate (DDR); InfiniBand ConnectX DDR (1)
CP6000              Gigabit Ethernet; InfiniBand PCI Express Single Data Rate and Double Data
                    Rate (DDR) (3); InfiniBand ConnectX DDR (1)(3); QsNetII

(1) Mellanox ConnectX InfiniBand cards require OFED Version 1.2.5 or later.
(2) The HP ProLiant DL385 G2 and DL145 G3 servers require a PCI Express card in order to use
    this interconnect.
(3) This interconnect is supported by CP6000 hardware models with PCI Express.
Mixing Adapters
Within a given interconnect family, several different adapters can be supported. However,
HP requires that all adapters be from the same interconnect family; a mix of adapters
from different interconnect families is not supported.
InfiniBand Double Data Rate
All components in a network must be DDR to achieve DDR performance levels.
ConnectX InfiniBand Double Data Rate
Currently ConnectX adapters cannot be mixed with other types of adapters.
Myrinet Adapters
The Myrinet adapters can be either the single-port M3F-PCIXD-2 (Rev. D) or the dual-port
M3F2-PCIXE-2 (Rev. E and Rev. F); mixing adapter types is not supported.
QsNetII
The QsNetII high-speed interconnect from Quadrics, Ltd. is the only version of Quadrics
interconnect that is supported.
1.9 Large-Scale Systems
A typical HP XC system contains from 5 to 512 nodes. To allow systems of a greater size, an HP
XC system can be arranged into a large-scale configuration with up to 1024 compute nodes (HP
might consider larger systems as special cases).
This configuration arranges the HP XC system as a collection of hardware regions that are tied
together through a ProCurve 2848 Ethernet switch.
The nodes of the large-scale system are divided as equally as possible between the individual
HP XC systems, which are known as regions. The head node for a large-scale HP XC system is
always the head node of region 1.
2 Cabling Server Blades
The following topics are addressed in this chapter:
• Blade Enclosure Overview
• Network Overview
• Cabling for the Administration Network
• Cabling for the Console Network
• Cabling for the Interconnect Network
• Cabling for the External Network
2.1 Blade Enclosure Overview
An HP XC blade cluster is made up of one or more blade enclosures connected together. Each
blade enclosure must contain the following:
• 1 to 16 blade servers
• 1 Ethernet Interconnect blade in bay 1 for the Administration Network
• 1 Onboard Administrator (OA) for managing the enclosure
NOTE: Enclosures might also have a redundant Onboard Administrator.
• The requisite number of fans and power supplies to fill the needs of all the hardware
In addition, each enclosure needs an additional blade interconnect module for the cluster
interconnect. On a Gigabit Ethernet (GigE) cluster, this could be either another Ethernet Switch
or an Ethernet pass-thru module in bay 2. On an InfiniBand (IB) cluster, this would be one of the
double-wide IB Blade switches in bays 5 and 6.
In certain circumstances, an additional Ethernet interconnect module might be needed to
support required external connections. This applies only to Gigabit Ethernet clusters with
half-height blades that need external connections. For more information, see "Cabling for
the External Network" (Section 2.6).
The various enclosures that make up a cluster are connected to each other through external
ProCurve switches. Every cluster needs at least one ProCurve Administrative Network switch
(a 2800 series) and may optionally have a Console Network Switch (a 2600 series). It is possible
to have the Console and Administrative Network combined over the single 2800 series switch
on smaller configurations.
Gigabit Ethernet clusters require one or more external Ethernet switches to act as the cluster
interconnect between the enclosures. This can be set up one of two ways.
• If the cluster uses Ethernet switches in each enclosure, then you need a smaller external
  interconnect because you only need one connection for each enclosure in the cluster (although
  this might be a trunked connection).
• If the cluster uses Ethernet pass-through modules in each enclosure, you need a large external
  Ethernet switch with enough connections for each node in the cluster.
InfiniBand clusters require one or more external IB switches, with at least one managed switch
to manage the fabric.
2.2 Network Overview
An HP XC system consists of several networks: administration, console, interconnect, and external
(public). In order for these networks to function, you must connect the enclosures, server blades,
and switches according to the guidelines provided in this chapter.
Chapter 3 (page 45) describes specific node and switch connections for non-blade hardware
configurations.
A hardware configuration with server blades does not have these specific cabling requirements;
specific switch port assignments are not required. However, HP recommends a logical ordering
of the cables on the switches to facilitate serviceability. Enclosures are discovered in port order,
so HP recommends that you cable them in the order you want them to be numbered. Also, HP
recommends that you cable the enclosures in lower ports and cable the external nodes in the
ports above them.
The configuration of an HP XC Blade System depends on its size. Larger clusters require
additional switches to manage the additional enclosures or regions. Figure 2-1 (page 35),
Figure 2-2 (page 36), and Figure 2-3 (page 37) illustrate small, medium, and large systems.
Appendix B provides server blade configuration examples organized by interconnect type and
server blade height to use as a reference.
Small HP XC Cluster of Server Blades
Figure 2-1 (page 35) provides two illustrations of a small HP XC cluster of four enclosures and
a maximum of 64 nodes.
The top portion shows a Gigabit Ethernet switch with two connections for each of the four
enclosures: one connection to the (ProCurve managed) Gigabit Ethernet switch on the enclosure
and the other to the Onboard Administrator.
The bottom portion provides some additional detail. It shows the ProCurve managed Gigabit
Ethernet switch connected to the Gigabit Ethernet switch in bay 1 of each enclosure and to the
Primary Onboard Administrator External Link of each enclosure.
Figure 2-1 Interconnection Diagram for a Small HP XC Cluster of Server Blades
(The figure shows a ProCurve managed GigE switch, which may be any managed GigE switch
supported by the cluster platform, connected to the GigE switch in bay 1 of Enclosures 1
through 4 and to the Primary OA External Link of each enclosure; NIC1 of the server blades
in each enclosure connects to that enclosure's GigE switch.)
Medium Sized HP XC Cluster of Server Blades
Figure 2-2 (page 36) provides two illustrations of a medium sized HP XC cluster of 32 enclosures
and a maximum of 512 nodes.
The top portion shows a Gigabit Ethernet switch (a ProCurve 2848) that is connected to the
enclosure switch in bay 1 of each enclosure as well as to a ProCurve 2650 switch that connects
to the Onboard Administrator external link of each enclosure.
The bottom portion provides some additional detail. It shows the ProCurve managed Gigabit
Ethernet switch connected to the Gigabit Ethernet switch of each enclosure and to a ProCurve
2650 switch, which connects the Primary Onboard Administrator External Link of each enclosure.
Figure 2-2 Interconnection Diagram for a Medium Sized HP XC Cluster of Server Blades
(The figure shows a ProCurve 2848, which may be any ProCurve managed GigE switch; smaller
systems may use a ProCurve 2824. It connects to the GigE switch in bay 1 of Enclosures 1
through 32 and to a ProCurve 2650, for which smaller systems may use a ProCurve 2626; the
ProCurve 2650 connects to the Primary OA External Link of each enclosure, and NIC1 of the
server blades in each enclosure connects to that enclosure's GigE switch.)
Large HP XC Cluster of Server Blades
Figure 2-3 (page 37) illustrates a large HP XC cluster of eight regions, with 32 enclosures per
region.
There is a Gigabit Ethernet switch for the HP XC system that is connected to a Gigabit Ethernet
switch in each region.
The Gigabit Ethernet switch in a region is connected to each enclosure's Gigabit Ethernet switch
in bay 1 and to a ProCurve 2650 switch. The ProCurve 2650 switch is connected to the Primary
Onboard Administrator External Link of each enclosure.
Figure 2-3 Interconnection Diagram for a Large HP XC Cluster of Server Blades
(The figure shows a top-level GigE switch connected to a GigE switch in each of Regions 1
through 8. Within each region, the region's GigE switch connects to the GigE switch in bay 1
of each of the region's 32 enclosures and to a ProCurve 2650 switch, which in turn connects
to the Primary OA External Link of each enclosure; Region 8 contains Enclosures 481 through
512.)
2.3 Cabling for the Administration Network
For server blades, the administration network is created and connected through ProCurve model
2800 series switches. One switch is designated as the root administration switch and that switch
can be connected to multiple branch administration switches, if required.
NIC1 on each server blade is dedicated as the connection to the administration network. NIC1
of all server blades connects to interconnect bay 1 on the enclosure.
The entire administration network is formed by connecting the device (either a switch or a
pass-thru module) in interconnect bay 1 of each enclosure to one of the ProCurve administration
network switches.
Non-blade server nodes must also be connected to the administration network. See Chapter 3
(page 45) to determine which port on the node is used for the administration network; the port
you use depends on your particular hardware model.
Figure 2-4 illustrates the connections that form the administration network.
Figure 2-4 Administration Network Connections
(The figure shows non-blade servers and a c-Class blade enclosure connected to the Admin
ProCurve 2800 series switch: one NIC on each non-blade server and the Gigabit Ethernet
interconnect switch in interconnect bay 1 of the enclosure, which is wired internally to NIC1
of each server blade, connect to the administration switch.)
2.4 Cabling for the Console Network
The console network is part of the private administration network within an HP XC system, and
it is used primarily for managing and monitoring the node consoles.
On a small cluster, the console management devices can share a single top-level ProCurve 2800
root administration switch. On larger hardware configurations that require more ports, the
console network is formed with separate ProCurve model 2600 series switches.
You arrange these switches in a hierarchy similar to the administration network. One switch is
designated as the root console switch and that switch can be connected to multiple branch console
switches. The top-level root console switch is then connected to the root administration switch.
HP server blades use iLO2 as the console management device. Each iLO2 in an enclosure connects
to the Onboard Administrator. To form the console network, connect the Onboard Administrator
of each enclosure to one of the ProCurve console switches.
Non-blade server nodes must also be connected to the console network. See Chapter 3 (page 45)
to determine which port on the node is used for the console network; the port you use depends
on your particular hardware model.
Figure 2-5 illustrates the connections that form the console network.
Figure 2-5 Console Network Connections
(The figure shows the console connections: the MGT port of each non-blade server and the
Onboard Administrator of the enclosure, to which the iLO2 of each server blade connects
internally, are cabled to the Console ProCurve 2600 series switch, which connects to the
Admin ProCurve 2800 series switch.)
2.5 Cabling for the Interconnect Network
The interconnect network is a private network within an HP XC system. Typically, every node
in an HP XC system is connected to the interconnect. The interconnect network provides a
high-speed communications path used primarily for user file service and for communications
within applications that are distributed among nodes in the cluster.
Gigabit Ethernet and InfiniBand are supported as the interconnect types for HP XC hardware
configurations with server blades and enclosures. The procedure to configure the interconnect
network depends upon the type of interconnect in use.
• Configuring a Gigabit Ethernet Interconnect
• Configuring an InfiniBand Interconnect
• Configuring the Interconnect Network Over the Administration Network
2.5.1 Configuring a Gigabit Ethernet Interconnect
A Gigabit Ethernet interconnect requires one or more external Ethernet switches to act as the
interconnect between the enclosures that make up the HP XC system.
On systems using a Gigabit Ethernet interconnect, one NIC on each server blade is dedicated as
the connection to the interconnect network. On a server blade, NIC2 is used for this purpose.
NIC2 of all server blades connects to interconnect bay 2 on the enclosure.
The entire interconnect network is formed by connecting the device (either a switch or a pass-thru
module) in interconnect bay 2 of each enclosure to one of the Gigabit Ethernet interconnect
switches.
If the device is a switch, the Gigabit uplink to the higher level ProCurve switch can be a single
wire or a trunked connection of 2, 4, or 8 wires. If the device is a pass-thru module, there must
be one uplink connection for each server blade in the enclosure.
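For example, a four-wire trunked uplink might be defined on a ProCurve switch with a single
command. This is a sketch only; the port numbers and trunk name are hypothetical, and the
exact syntax depends on the switch model and firmware:

ProCurve(config)# trunk 45-48 trk1 lacp

This command groups ports 45 through 48 into the single logical link trk1 using LACP, so the
four cables from the enclosure switch behave as one higher-bandwidth uplink.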
Non-blade server nodes must also be connected to the interconnect network. See Chapter 3
(page 45) to determine which port on the node is used for the interconnect network; the port
you use depends on your particular hardware model.
Figure 2-6 illustrates the connections for a Gigabit Ethernet interconnect.
Figure 2-6 Gigabit Ethernet Interconnect Connections
(The figure shows the Gigabit Ethernet interconnect connections: a second NIC on each
non-blade server and the Gigabit Ethernet interconnect switch in interconnect bay 2 of the
enclosure, which is wired internally to NIC2 of each server blade, connect to the external
Gigabit Ethernet interconnect switch.)
2.5.2 Configuring an InfiniBand Interconnect
An InfiniBand interconnect requires one or more external InfiniBand switches with at least one
managed switch to manage the fabric.
Systems using an InfiniBand interconnect require you to install an InfiniBand mezzanine card
into mezzanine bay 2 of each server blade to provide a connection to the InfiniBand interconnect
network. The InfiniBand card in mezzanine bay 2 connects to the double-wide InfiniBand switch
in interconnect bays 5 and 6 on the enclosure.
The entire interconnect network is formed by connecting the InfiniBand switches in interconnect
bays 5 and 6 of each enclosure to one of the InfiniBand interconnect switches.
Non-blade server nodes also require InfiniBand cards and must also be connected to the
interconnect network.
Figure 2-7 illustrates the connections for an InfiniBand interconnect.
Figure 2-7 InfiniBand Interconnect Connections
[Figure: non-blade servers with InfiniBand PCI cards and a C-Class blade enclosure with InfiniBand mezzanine cards in mezzanine bay 2. The double-wide InfiniBand switch module in interconnect bays 5 and 6 connects to the InfiniBand interconnect switch; the Admin (ProCurve 2800 series) and Console (ProCurve 2600 series) switches and the external public network are also shown.]
2.5.3 Configuring the Interconnect Network Over the Administration Network
In cases where an additional Gigabit Ethernet port or switch may not be available, the HP XC
System Software enables you to configure the interconnect on the administration network. When
the interconnect is configured on the administration network, only a single LAN is used.
To configure the interconnect on the administration network, include the --ic=AdminNet
option on the discover command line, which is documented in the HP XC System Software
Installation Guide.
Be aware that configuring the interconnect on the administration network may negatively impact
system performance.
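For example, a minimal invocation might look like the following; the discover command takes additional options that depend on your hardware configuration, so treat this as a sketch and see the HP XC System Software Installation Guide for the complete syntax:

# discover --ic=AdminNet

With this option, the discover command uses the administration network as the interconnect rather than searching for a separate high-speed network.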
2.6 Cabling for the External Network
Depending upon the roles you assign to nodes during the cluster configuration process, some
nodes might require connections to an external public network. Making these connections requires
one or more Ethernet ports in addition to the ports already in use. The ports you use depend
upon the hardware configuration and the number of available ports.
On non-blade server nodes, the appropriate port assignments for the external network are shown in Chapter 3 (page 45).
On a server blade, the number of available Ethernet ports is influenced by the type of interconnect
and the server blade height:
• Nodes in clusters that use an InfiniBand interconnect have only one NIC in use for the administration network.
• Nodes in clusters that use a Gigabit Ethernet interconnect have two NICs in use: one for the administration network, and one for the interconnect network.
• Half-height server blade models have two built-in NICs.
• Full-height server blade models have four built-in NICs.
You can use the built-in NICs on a server blade if any are available. If the node requires more
ports, you must add an Ethernet card to mezzanine bay 1 on the server blade. If you add an
Ethernet card to mezzanine bay 1, you must also add an Ethernet interconnect module (either a
switch or pass-thru module) to interconnect bay 3 or 4 of the c7000 enclosure.
On full-height server blades, you can avoid having to purchase an additional mezzanine card
and interconnect module by creating virtual local area networks (VLANs). On a full-height server
blade, NICs 1 and 3 are both connected to interconnect bay 1, and, for the c7000 enclosure, NICs
2 and 4 are both connected to interconnect bay 2. If you are using one of these NICs for the
connection to the external network, you might have to create a VLAN on the switch in that bay
to separate the external network from other network traffic.
The ports and interconnect bays used for external network connections vary depending on the
hardware configuration, the ports that are already being used for the other networks, and the
server blade height. For more information about how to configure the external network in these
various configurations, see the illustrations in the following sections:
• “Configuring the External Network: Option 1” (Section 2.6.1)
• “Configuring the External Network: Option 2” (Section 2.6.2)
• “Configuring the External Network: Option 3 - Non Gigabit Ethernet Interconnect Clusters” (Section 2.6.3)
2.6.1 Configuring the External Network: Option 1
Figure 2-8 (page 42) assumes that NIC1 and NIC2 are already in use for the administration and
interconnect networks. This situation requires a third NIC for the external network. Half-height
server blades do not have three NICs, and therefore, half-height server blades are not included
in this example.
Because NIC1 and NIC3 on a full-height server blade are connected to interconnect bay 1, you
must use VLANs on the switch in that bay to separate the external network from the
administration network.
Also, in this example, PCI Ethernet cards are used in the non-blade server nodes. If the hardware model differs, see Chapter 3 (page 45) to determine which port to use for the external network.
Figure 2-8 External Network Connections: Full-Height Server Blades and NIC1 and NIC2 in Use
[Figure: non-blade servers with Ethernet PCI cards and a C-Class blade enclosure. The switch in interconnect bay 1 carries an ADMIN VLAN and an EXTERNAL VLAN for NIC1 and NIC3; NIC2 connects through interconnect bay 2 to the Gigabit Ethernet interconnect switch; the external public network connects to the bay 1 switch.]
2.6.2 Configuring the External Network: Option 2
Figure 2-9 (page 43) assumes that NIC1 and NIC2 are already in use for the administration and
interconnect networks. This situation requires a third NIC for the external network, but unlike
Figure 2-8 (page 42), this hardware configuration includes half-height server blades. Therefore,
to make another Ethernet NIC available, you must add an Ethernet card to mezzanine bay 1 on
each server blade that requires an external connection. You must also install an Ethernet
interconnect module in interconnect bay 3 for these cards.
In addition, PCI Ethernet cards are used in the non-blade server nodes. If the hardware model differs, see Chapter 3 (page 45) to determine which port to use for the external network.
Figure 2-9 External Network Connections: Half-Height Server Blades and NIC1 and NIC2 in Use
[Figure: non-blade servers with Ethernet PCI cards and a C-Class blade enclosure with half-height server blades. Ethernet mezzanine cards in mezzanine bay 1 connect through the module in interconnect bay 3 to the external public network; NIC2 connects through interconnect bay 2 to the Gigabit Ethernet interconnect switch.]
2.6.3 Configuring the External Network: Option 3 - Non Gigabit Ethernet Interconnect
Clusters
The administration network requires only one network interface, NIC1, on clusters that do not
use Gigabit Ethernet as the interconnect (that is, they use InfiniBand or the interconnect on the
administration network).
On these non-Gigabit Ethernet interconnect clusters, you have two methods of configuring an external network connection, and the option you choose depends on whether the collection of nodes requiring external connections includes half-height server blades.
• If only full-height server blades require external connections, you can use NIC3 for the external network. This is similar to the way the external connection is configured in Figure 2-8 (page 42), and it saves the cost of an additional interconnect device in bay 2.
• If half-height server blades require external connections, you cannot use NIC3 because half-height server blades do not have a third NIC. In this case, you must use NIC2 as the external network connection, which requires an interconnect module to be present in bay 2.
Figure 2-10 (page 44) also shows the use of built-in NICs in the non-blade server nodes for the external connection, but this varies by hardware model. See Chapter 3 (page 45) for information about which port to use for the external network.
Figure 2-10 External Network Connections: Half and Full-Height Server Blades and NIC1 in Use
[Figure: non-blade servers and a C-Class blade enclosure with half-height and full-height server blades; server blade NIC2 connects through the module in interconnect bay 2 to the external public network.]
2.6.4 Creating VLANs
Use the following procedure on GbE2c (Nortel) switches if you need to configure a VLAN to
separate the external network from other network traffic.
1. See the illustrations of interconnect bay port mapping connections in the HP BladeSystem
Onboard Administrator User Guide to determine which ports on the switch to connect to each
of the two virtual networks. Remember to include at least one of the externally accessible
ports in each VLAN.
2. Connect a serial device to the serial console port of the GbE2c switch.
3. Press the Enter key.
4. When you are prompted for a password, enter admin, which is the default password.
5. Enter the following commands to access the VLAN configuration:
a. cfg
b. l2 (the letter l, as in layer, not the number one)
c. vlan 2 (be sure to enter a space between vlan and the VLAN number)
6. Specify a name for the VLAN; choose any name you want.
# name your_name
7. Enable the VLAN:
# ena
8. Add each port to the VLAN one at a time. If you see a message that the port is in another
VLAN, answer yes to move it. This example adds ports 1, 3, and 21 to the VLAN.
# add 1
# add 3
# add 21
If you need more information about creating VLANs, see the GbE2c documentation.
9. When you have completed adding ports, enter apply to activate your changes and enter save to save them.
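Taken together, and using the example VLAN number, name, and ports from the preceding steps, a complete GbE2c session might look like the following:

# cfg
# l2
# vlan 2
# name external
# ena
# add 1
# add 3
# add 21
# apply
# save

The name external and the port numbers are examples only; substitute the name and ports appropriate for your configuration.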
3 Making Node and Switch Connections
This chapter provides information about the connections between nodes and switches that are
required for an HP XC system.
The following topics are addressed:
• “Cabinets” (Section 3.1)
• “Trunking and Switch Choices” (Section 3.2)
• “Switches” (Section 3.3)
• “Interconnect Connections” (Section 3.4)
IMPORTANT: The specific node and switch port connections documented in this chapter do
not apply to hardware configurations containing HP server blades and enclosures. For information about connections for server blades and enclosures, see Chapter 2 (page 33).
3.1 Cabinets
Cabinets are used as a packaging medium. The HP XC system hardware is contained in two
types of cabinets:
• Application cabinets
The application cabinets contain the compute nodes and are optimized to meet power, heat, and density requirements. All nodes in an application cabinet are connected to the local branch switch.
• Utility cabinets
The utility cabinet is intended to fill a more flexible need. In all configurations, at a minimum, the utility cabinet contains the head node. Nodes with external storage and nodes that are providing services to the cluster (called service nodes or utility nodes) are also contained in the utility cabinet. All nodes in the utility cabinet are connected to the root switches (administration and console).
Figure 3-1 illustrates the relationship between application cabinets, utility cabinets, and the Root Administration Switch.
Figure 3-1 Application and Utility Cabinets
[Figure: a utility cabinet and four application cabinets, all connected to the Root Administration Switch.]
3.2 Trunking and Switch Choices
The HP XC System Software supports the use of port trunking (that is, the use of multiple network ports in parallel to increase link speed beyond that of any single port) on the ProCurve switches
to create a higher bandwidth connection between the Root Administration Switches and the
Branch Administration Switches.
For physically small hardware models (such as a 1U HP ProLiant DL145 server), a large number
of servers (more than 30) can be placed in a single cabinet, and are all attached to a single branch
switch. The branch switch is a ProCurve Switch 2848, and two-port trunking is used for the
connection between the Branch Administration Switch and the Root Administration Switch.
For physically larger hardware models (2U and larger), such as the HP Integrity rx2600 and HP
ProLiant DL585 servers, a smaller number of servers can be placed in a single cabinet. In this
case, the branch switch is a ProCurve Switch 2824, which is sufficient to support up to 19 nodes.
In this release, the HP XC System Software supports the use of multiwire connections or trunks
between switches in the system. In a large-scale system (one that has regions and uses a super
root switch), you can use a one-wire to four-wire trunk between the super root switch and the
root administration switch for each of the HP XC regions. On a smaller-scale HP XC system or
within a single region, one-wire and two-wire trunks are supported for connection between the
root administration switch and the branch administration switches.
You must configure trunks on both switches before plugging the cables in between the switches.
Otherwise, a loop is created between the switches, and the network is rendered useless.
Trunking configurations on switches must follow these guidelines:
• Because of the architecture of the ProCurve switch, the HP XC System Software uses only 10 ports of each 12-port segment to ensure maximum bandwidth through the switch; the last two ports are not used.
• Trunk groups must be contiguous.
Thus, by adhering to the trunking guidelines, the following ports are used to configure a ProCurve
2848 Super Root Switch for three regions using four-wire trunks:
• Region 1 - Ports 1, 2, 3, 4
• Region 2 - Ports 5, 6, 7, 8
• Region 3 - Ports 13, 14, 15, 16
• Ports 9, 10, 11, and 12 are not used
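For example, to define the four-wire trunk for Region 1 on the Super Root Switch, you might enter a command such as the following from the ProCurve CLI. This is a sketch only; the exact syntax and trunk group names depend on the switch model and firmware, so verify it against your ProCurve documentation:

ProCurve# configure
ProCurve(config)# trunk 1-4 trk1 trunk

Remember to define the matching trunk on the Root Administration Switch (ports 43 through 46 in the four-wire case) before connecting the cables between the switches.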
3.3 Switches
The following topics are addressed in this section:
• “Specialized Switch Use” (Section 3.3.1)
• “Administrator Passwords on ProCurve Switches” (Section 3.3.2)
• “Switch Port Connections” (Section 3.3.3)
• “Super Root Switch” (Section 3.3.4)
• “Root Administration Switch” (Section 3.3.5)
• “Root Console Switches” (Section 3.3.6)
• “Branch Administration Switches” (Section 3.3.7)
• “Branch Console Switches” (Section 3.3.8)
3.3.1 Specialized Switch Use
The following describes the specialized uses of switches in an HP XC system.
Super Root Switch
This switch is the top-level switch in a large-scale system, that is, an HP XC system with more than 512 nodes requiring more than one Root Administration Switch. Root Administration Switches are connected directly to this switch.
Root Administration Switch
This switch connects directly to Gigabit Ethernet ports of the head node, the Root Console Switch, Branch Administration Switches, and other nodes in the utility cabinet.
Root Console Switch
This switch connects to the Root Administration Switch, Branch Console Switches, and the management console ports of nodes in the utility cabinet.
Branch Administration Switch
This switch connects to the Gigabit Ethernet ports of compute nodes and connects to the Root Administration Switch.
Branch Console Switch
This switch connects to the Root Console Switch and connects to the management console ports of the compute nodes.
IMPORTANT: Switch use is not strictly enforced on HP XC systems with HP server blades.
Table 3-1 lists the switch models that are supported for each use.
Table 3-1 Supported Switch Models
Switch Use               ProCurve Switch Model
Administration Switch    ProCurve 2848 or 2824
Console Switch           ProCurve 2650 or 2626
3.3.2 Administrator Passwords on ProCurve Switches
The documentation that came with the ProCurve switch describes how to optionally set an
administrator's password for the switch.
If you define and set a password on a ProCurve switch, you must set the same password on
every ProCurve switch that is a component of the HP XC system.
During the hardware discovery phase of the system configuration process, you are prompted to
supply the password for the ProCurve switch administrator, and the password on every switch
must match.
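For example, you might set the manager password from each switch's CLI; the exact prompts vary by switch model, so treat this as a sketch only:

ProCurve# configure
ProCurve(config)# password manager
New password for Manager: ********
Please retype new password for Manager: ********

Repeat the same setting, with the identical password, on every ProCurve switch in the system.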
3.3.3 Switch Port Connections
Most HP XC systems have at least one Root Administration Switch and one Root Console Switch.
The number of Branch Administration Switches and Branch Console Switches depends upon
the total number of nodes in the hardware configuration.
The administration network using the root and branch switches must be parallel to the console
network root and branch switches. In other words, if a particular node uses port N on the Root
Administration Switch, its management console port must be connected to port N on the Root
Console Switch. If a particular node uses port N on the Branch Administration Switch, its
management console port must be connected to port N on the corresponding Branch Console
Switch.
Figure 3-2 Node and Switch Connections on a Typical System
[Figure: the head node and other specialized role nodes connect to the root administration and console switches; compute nodes connect through branch administration and branch console switches.]
Figure 3-3 shows a graphical representation of the logical layout of the switches and nodes in a
large-scale system with a Super Root Switch. The head node connects to Port 42 on the Root
Administration Switch in Region 1.
Figure 3-3 Switch Connections for a Large-Scale System
[Figure: Region 1 of a large-scale system. A ProCurve 2848 Super Root Switch connects to the ProCurve 2848 Root Admin Switch, with a separate link to Region 2; the Root Admin Switch connects to the ProCurve 2650 Root Console Switch and, through trunked links, to a ProCurve 2848 Branch Admin Switch. The Branch Admin Switch and the ProCurve 2650 Branch Console Switch connect to the Ethernet and console (CP) ports of Node 1 and Node 2, with links continuing to the next switches.]
3.3.3.1 Switch Connections and HP Workstations
HP model xw workstations do not have console ports. Only the Root Administration Switch
supports mixing nodes without console management ports with nodes that have console
management ports (that is, all other supported server models).
HP workstations connected to the Root Administration Switch must be connected to the next
lower-numbered contiguous set of ports immediately below the nodes that have console
management ports.
For example, if nodes with console management ports are connected to ports 42 through 36 on
the Root Administration Switch, the console ports are connected to ports 42 through 36 on the
Console Switch. Workstations must be connected starting at port 35 and lower to the Root
Administration Switch; the corresponding ports on the Console Switch are empty.
3.3.4 Super Root Switch
Figure 3-4 shows the Super Root Switch, which is a ProCurve 2848. A Super Root Switch configuration supports the use of trunking to expand the bandwidth of the connection between the Root Administration Switch and the Super Root Switch. The connection can be as simple as a single wire. See “Trunking and Switch Choices” (page 45) for more information about trunking and the Super Root Switch.
You must configure trunks on both switches before plugging in the cables between the switches.
Otherwise, a loop is created between the two switches.
Figure 3-4 illustrates a ProCurve 2848 Super Root Switch.
Figure 3-4 ProCurve 2848 Super Root Switch
[Figure: ProCurve 2848 Super Root Switch front panel. Ports 1, 3, 5, and 7 are the first four ports on the top row; ports 2, 4, 6, and 8 are the first four ports on the bottom row. The panel provides 10/100/1000 Base-TX RJ-45 ports and Gigabit Ethernet ports 45 through 48.]
Table 3-2 shows how ports are allocated for large-scale systems with multiple regions.
Table 3-2 Trunking Port Use on Large-Scale Systems with Multiple Regions

Trunking Type      Ports Used on Super Root Switch    Ports Used on Root Administration Switch
4-wire Trunking:
Region 1           1 through 4                        43 through 46
Region 2           5 through 8                        43 through 46
Region 3           13 through 16                      43 through 46
2-wire Trunking:
Region 1           1 and 2                            45 and 46
Region 2           3 and 4                            45 and 46
Region 3           5 and 6                            45 and 46
Region 4           7 and 8                            45 and 46
Region 5           9 and 10                           45 and 46
Region 6           13 and 14                          45 and 46
3.3.5 Root Administration Switch
The Root Administration Switch for the administration network of an HP XC system can be
either a ProCurve 2848 switch or a ProCurve 2824 switch for small configurations.
If you are using a ProCurve 2848 switch as the switch at the center of the administration network, see Figure 3-5. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure. Gray-colored ports are reserved for future use.
Figure 3-5 ProCurve 2848 Root Administration Switch
[Figure: ProCurve 2848 Root Administration Switch. Uplinks from branches begin at port 1 (ascending); connections to node administration ports begin at port 41 (descending); ports 45 through 48 are the Gigabit Ethernet ports.]
The callouts in the figure enumerate the following:
1. Port 42 must be used for the administration port of the head node.
2. Ports 43 through 46 are used for connecting to the Super Root Switch if you are configuring
a large-scale system.
3. Port 47 can be one of the following:
• Console connection (or line monitoring card) for the interconnect.
• Connection to the Interconnect Ethernet Switch (IES), which connects to the management port of multiple interconnect switches.
4. Port 48 is used as the link to the Root Console Switch (ProCurve 2650 or ProCurve 2626).
The ports on this switch must be allocated as follows for maximum performance:
• Ports 1–10, 13–22, 25–34, 37–42
— Starting with port 1, the ports are used for links from Branch Administration Switches, which includes the use of trunking. Two-port trunking can be used for each Branch Administration Switch.
NOTE: Trunking is restricted to within the same group of 10 (you cannot trunk with ports 10 and 13). HP recommends that all trunking use consecutive ports within the same group (1–10, 13–22, 25–34, or 37–42).
— Starting with port 41 and in descending order, ports are assigned for use by individual nodes.
• Ports 11, 12, 23, 24, 35, 36 are unused.
For size-limited configurations, the ProCurve 2824 switch is an alternative Root Administration
Switch.
If you are using a ProCurve 2824 switch as the switch at the center of the administration network, see Figure 3-6. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure.
Figure 3-6 ProCurve 2824 Root Administration Switch
[Figure: ProCurve 2824 Root Administration Switch. Uplinks from branches begin at port 1 (ascending); connections to node administration ports begin at port 21 (descending); ports 21 through 24 include the dual personality ports.]
The callouts in the figure enumerate the following:
1. Uplinks from branches start at port 1 (ascending).
2. 10/100/1000 Base-TX RJ-45 ports.
3. Connections to node administration ports start at port 21 (descending).
4. Port 22 is used for the administration port of the head node.
5. Dual personality ports.
6. Port 24 is used as the interconnect to the Root Console Switch (a ProCurve 2650 or ProCurve
2626 model switch).
As a result of performance considerations and given the number of ports available in the ProCurve
2824 switch, the allocation order of ports is:
• Ports 1–10, 13–21
— Starting with port 1, the ports are used for links from Branch Administration Switches, which can include the use of trunking. For example, if two-port trunking is used, the first Branch Administration Switch uses ports 1 and 2 of the Root Administration Switch.
NOTE: Trunking is restricted to within the same group of 10 (you cannot trunk with ports 10 and 13). HP recommends that all trunking use consecutive ports within the same group (1–10 or 13–21).
— Starting with port 21 and descending, ports are assigned for use by individual root nodes. A root node is a node that is connected directly to the Root Administration Switch.
• Ports 11 and 12 are unused.
• Port 23 can be one of the following:
— Console connection (or line monitoring card) for the interconnect
— Connection to the Interconnect Ethernet Switch (IES), which connects to the management port of multiple interconnect switches
• Port 24 is used as the link to the Root Console Switch.
3.3.6 Root Console Switches
The following switches are supported as Root Console Switches for the console branch of the
administration network:
• ProCurve 2650 switch
• ProCurve 2610-48 switch
• ProCurve 2626 switch
• ProCurve 2610-24 switch
3.3.6.1 ProCurve 2650 Switch
You can use a ProCurve 2650 switch as a Root Console Switch for the console branch of the
administration network. The console branch functions at a lower speed (10/100 Mbps) than the
rest of the administration network.
Figure 3-7 shows the ProCurve 2650 switch. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure.
Figure 3-7 ProCurve 2650 Root Console Switch
[Figure: ProCurve 2650 Root Console Switch. Uplinks from branches start at port 1 (ascending); connections to node console ports start at port 41 (descending); ports 1 through 48 are 10/100Base-TX RJ-45 ports, and ports 49 and 50 are the Gigabit Ethernet ports.]
The callouts in the figure enumerate the following:
1. Port 42 must be reserved for an optional connection to the console port on the head node.
2. Port 49 is reserved.
3. Port 50 is the Gigabit Ethernet link to the Root Administration Switch.
Allocate the ports on this switch for consistency with the administration switches, as follows:
• Ports 1–10, 13–22, 25–34, 37–41
— Starting with port 1, the ports are used for links from Branch Console Switches. Trunking is not used.
— Starting with port 41 and in descending order, ports are assigned for use by individual nodes in the utility cabinet. Nodes in the utility cabinet are connected directly to the Root Administration Switch.
NOTE: There must be at least one idle port in this set to indicate the dividing line between branch links and root node administration ports.
• Ports 11, 12, 23, 24, 35, 36, and 43–48 are unused.
3.3.6.2 ProCurve 2610-48 Switch
You can use a ProCurve 2610-48 switch as a Root Console Switch for the console branch of the
administration network. The console branch functions at a lower speed (10/100 Mbps) than the
rest of the administration network.
Figure 3-8 shows the ProCurve 2610-48 switch. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure.
Figure 3-8 ProCurve 2610-48 Root Console Switch
The callouts in the figure enumerate the following:
1. Port 42 must be reserved for an optional connection to the console port on the head node.
2. Port 49 is reserved.
3. Port 50 is the Gigabit Ethernet link to the Root Administration Switch.
Allocate the ports on this switch for consistency with the administration switches, as follows:
• Ports 1–10, 13–22, 25–34, 37–41
— Starting with port 1, the ports are used for links from Branch Console Switches. Trunking is not used.
— Starting with port 41 and in descending order, ports are assigned for use by individual nodes in the utility cabinet. Nodes in the utility cabinet are connected directly to the Root Administration Switch.
NOTE: There must be at least one idle port in this set to indicate the dividing line between branch links and root node administration ports.
• Ports 11, 12, 23, 24, 35, 36, and 43–48 are unused.
3.3.6.3 ProCurve 2626 Switch
You can use a ProCurve 2626 switch as a Root Console Switch for the console branch of the administration network. Figure 3-9 shows the ProCurve 2626 switch. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure.
Figure 3-9 ProCurve 2626 Root Console Switch
[Figure: ProCurve 2626 Root Console Switch. Uplinks from branches start at port 1 (ascending); connections to node console ports start at port 21 (descending); ports 25 and 26 are the Gigabit Ethernet ports.]
The callouts in the figure enumerate the following:
1. Port 22 must be reserved for an optional connection to the console port on the head node.
2. Port 25 is reserved.
3. Port 26 is the Gigabit Ethernet link to the Root Administration Switch.
Allocate the ports on this switch for consistency with the administration switches, as follows:
• Ports 1–10, 13–21
— Starting with port 1, the ports are used for links from Branch Console Switches. Trunking is not used.
— Starting with port 21 and in descending order, ports are assigned for use by individual nodes in the utility cabinet. Nodes in the utility cabinet are connected directly to the Root Administration Switch.
NOTE: There must be at least one idle port in this set to indicate the dividing line between branch links and root node administration ports.
• Ports 11, 12, 23, and 24 are unused.
3.3.6.4 ProCurve 2610-24 Switch
You can use a ProCurve 2610-24 switch as a Root Console Switch for the console branch of the administration network. Figure 3-10 shows the ProCurve 2610-24 switch. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure.
Figure 3-10 ProCurve 2610-24 Root Console Switch
The callouts in the figure enumerate the following:
1. Port 22 must be reserved for an optional connection to the console port on the head node.
2. Port 25 is reserved.
3. Port 26 is the Gigabit Ethernet link to the Root Administration Switch.
Allocate the ports on this switch for consistency with the administration switches, as follows:
• Ports 1–10, 13–21
— Starting with port 1, the ports are used for links from Branch Console Switches. Trunking is not used.
— Starting with port 21 and in descending order, ports are assigned for use by individual nodes in the utility cabinet. Nodes in the utility cabinet are connected directly to the Root Administration Switch.
NOTE: There must be at least one idle port in this set to indicate the dividing line between branch links and root node administration ports.
• Ports 11, 12, 23, and 24 are unused.
3.3.7 Branch Administration Switches
The Branch Administration Switch of an HP XC system can be either a ProCurve 2848 switch or
a ProCurve 2824 switch.
Figure 3-11 shows the ProCurve 2848 switch. In the figure, white ports should not have
connections, black ports can have connections, and ports with numbered callouts are used for
specific purposes, described after the figure.
Figure 3-11 ProCurve 2848 Branch Administration Switch
[Figure: ProCurve 2848 Branch Administration Switch, showing connections to node administration ports on the 10/100/1000 Base-TX RJ-45 ports; ports 45 through 48 are the dual personality ports.]
The callouts in the figure enumerate the following:
1. Port 45 is used for the trunked link to the Root Administration Switch.
2. Port 46 is used for the trunked link to the Root Administration Switch.
Allocate the ports on this switch for maximum performance, as follows:
• Ports 1–10, 13–22, 25–34, and 37–44 are used for the administration ports for the individual nodes (up to 38 nodes).
• Ports 11, 12, 23, 24, 35, 36, 47, and 48 are unused.
Figure 3-12 shows the ProCurve 2824 switch. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure.
Figure 3-12 ProCurve 2824 Branch Administration Switch
[Figure: ProCurve 2824 Branch Administration Switch, showing 10/100/1000 Base-TX RJ-45 ports and the dual personality ports.]
The callout in the figure enumerates the following:
1. Port 22 is used for the link to the Root Administration Switch.
Allocate the ports on this switch for maximum performance, as follows:
• Ports 1–10 and 13–21 are used for the administration ports for the individual nodes (up to 19 nodes).
• Ports 11, 12, 23, and 24 are unused.
3.3.8 Branch Console Switches
The Branch Console Switch of an HP XC system is a ProCurve 2650 or ProCurve 2610-48 switch.
The connections to the ports must parallel the connections of the corresponding Branch
Administration Switch. If a particular node uses port N on a Branch Administration Switch, its
management console port must be connected to port N on the corresponding Branch Console
Switch.
In each figure, white ports should not have connections, black ports can have connections, and
ports with numbered callouts are used for a specific purpose, described after the figures.
Figure 3-13 ProCurve 2650 Branch Console Switch
[Figure: ProCurve 2650 Branch Console Switch, showing connections to node console ports on the 10/100Base-TX RJ-45 ports (1 through 48); ports 49 and 50 are the Gigabit Ethernet ports.]
Figure 3-14 ProCurve 2610-48 Branch Console Switch
The callout in these figures enumerates the following:
1. Port 50 is the link to the Root Console Switch.
Allocate the ports on this switch for maximum performance, as follows:
• Ports 1–10, 13–22, 25–34, 37–44 are used for the console ports of individual nodes (up to 38 nodes).
• Ports 11, 12, 23, 24, 35, 36, 45–49 are unused.
3.4 Interconnect Connections
The high-speed interconnect connects every node in the HP XC system. Each node can have an
interconnect card installed in the highest speed PCI slot. Check the hardware documentation to
determine which slot this is.
The interconnect switch console port (or monitoring line card) also connects to the Root Administration Switch.
You must determine the absolute maximum number of nodes that could possibly be used with
the interconnect hardware that you have. This maximum number of ports on the interconnect
switch or switches (max-node) affects the naming of the nodes in the system. The documentation
that came with the interconnect hardware can help you find this number.
NOTE: You can choose a number smaller than the absolute maximum number of interconnect
ports for max-node, but you cannot expand the system to a size larger than this number in the
future without completely rediscovering the system, thereby renumbering all nodes in the system.
This restriction does not apply to hardware configurations that contain HP server blades and
enclosures.
Specific considerations for connections to the interconnect based on interconnect type are discussed
in the following sections:
• “QsNet Interconnect Connections” (Section 3.4.1)
• “Gigabit Ethernet Interconnect Connections” (Section 3.4.2)
• “Administration Network Interconnect Connections” (Section 3.4.3)
• “Myrinet Interconnect Connections” (Section 3.4.4)
• “InfiniBand Interconnect Connections” (Section 3.4.5)
The method for wiring the administration network and interconnect networks allows expansion
of the system within the system's initial interconnect fabric without recabling of any existing
nodes. If additional switch chassis or ports are added to the system as part of the expansion,
some recabling may be necessary.
3.4.1 QsNet Interconnect Connections
For the QsNetII interconnect developed by Quadrics, it is important that nodes are connected to
the Quadrics switch ports in a specific order. The order is affected by the order of the
administration network and console network.
Because the Quadrics port numbers start at 0, the highest port number on the Quadrics switch is port max-node minus 1, where max-node is the maximum number of nodes possible in the system. This is the port on the Quadrics switch to which the head node must be connected. For example, if max-node is 128, the head node connects to Quadrics port 127. The head node in an HP XC system is always the node connected to the highest port number of any node on the Root Administration Switch and the Root Console Switch.
NOTE: The head node port is not the highest port number on the Root Administration Switch.
Other higher port numbers are used to connect to other switches. If the Root Administration
Switch is a ProCurve 2848 switch, the head node is connected to port number 42, as discussed in “Root Administration Switch” (page 50). If the Root Administration Switch is a ProCurve 2824 switch, the head node is connected to port number 22. The head node should, however, be connected to the highest port number on the interconnect switch.
The next node connected directly to the root switches (Administration and Console) should have
connections to the Quadrics switch at the next highest port number on the Quadrics switch
(max-node minus 2). All nodes connected to the Root Administration Switch will be connected
to the next port in descending order.
Nodes attached to branch switches must be connected starting at the opposite end of the Quadrics
switch. The node attached to the first port of the first Branch Administration Switch should be
attached to the first port on the Quadrics switch (Port 0).
3.4.2 Gigabit Ethernet Interconnect Connections
The HP XC System Software is not concerned with the topology of the Gigabit Ethernet
interconnect, but it makes sense to structure it in parallel with the administration network in
order to make your connections easy to maintain.
Because the first logical Gigabit Ethernet port on each node is always used for connectivity to
the administration network, there must be a second Gigabit Ethernet port on each node if you
are using Gigabit Ethernet as the interconnect.
Depending upon the hardware model, the port can be built-in or can be an installed card. Any
node with an external interface must also have a third Ethernet connection of any kind to
communicate with external networks.
3.4.3 Administration Network Interconnect Connections
In cases where an additional Gigabit Ethernet port or switch may not be available, the HP XC
System Software allows the interconnect to be configured on the administration network. When
the interconnect is configured on the administration network, only a single LAN is used.
However, be aware that configuring the system in this way may negatively impact system
performance.
To configure the interconnect on the administration network, you include the --ic=AdminNet option on the discover command line, which is documented in the HP XC System Software Installation Guide.
If you do not specify the --ic=AdminNet option, the discover command attempts to locate
the highest speed interconnect on the system with the default being a Gigabit Ethernet network
that is separate from the administration network.
3.4.4 Myrinet Interconnect Connections
The supported Myrinet interconnects do not have the ordering requirements of the Quadrics
interconnect, but it makes sense to structure it in parallel with the other two networks in order
to make the connections easy to maintain and service.
3.4.5 InfiniBand Interconnect Connections
The supported InfiniBand interconnects do not have the ordering requirements of the Quadrics
interconnect, but it makes sense to structure it in parallel with the other two networks in order
to make the connections easy to maintain and service.
If you use a dual-ported InfiniBand host channel adapter (HCA), you must connect the IB cable to the lowest-numbered port on the HCAs; it is labeled either Port 1 or P1. This is necessary so that the OpenFabrics Enterprise Distribution (OFED) driver activates the IP interface called ib0 instead of ib1.
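After the system is running, you can verify from a node that the OFED driver activated ib0 by using standard Linux commands such as the following; the exact output depends on the OFED version installed:

# ifconfig ib0     (ib0 should exist and report an IP address)
# ibstat           (port 1 on the HCA should be reported as Active)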
4 Preparing Individual Nodes
This chapter describes how to prepare individual nodes in the HP XC hardware configuration.
The following topics are addressed:
•
•
•
•
•
•
•
•
•
•
4.1 Firmware Requirements and Dependencies
Before installing the HP XC System Software, verify that all hardware components are installed
with the minimum firmware versions listed in the master firmware list. You can find this list
from the following Web page:
Look in the associated hardware documentation for instructions about how to verify or upgrade
the firmware for each component.
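For example, on ProLiant nodes with LO-100i or iLO management processors, one way to check the management processor firmware version is to query it over IPMI. This is a sketch only; the authoritative verification method for each component is given in its hardware documentation:

# ipmitool mc info | grep "Firmware Revision"

Compare the reported revision against the master firmware list before proceeding.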
Table 4-1 lists the firmware dependencies of individual hardware components in an HP XC
system.
Table 4-1 Firmware Dependencies

Hardware Component                              Firmware Dependency

CP3000
HP ProLiant DL140 G2                            Lights-out 100i management (LO-100i), system BIOS
HP ProLiant DL140 G3                            LO-100i, system BIOS
HP ProLiant DL160 G5                            LO-100i, system BIOS
HP ProLiant DL360 G4                            Integrated lights out (iLO), system BIOS
HP ProLiant DL360 G4p                           iLO, system BIOS
HP ProLiant DL360 G5                            iLO2, system BIOS
HP ProLiant DL380 G4                            iLO, system BIOS
HP ProLiant DL380 G5                            iLO2, system BIOS
HP ProLiant DL580 G4                            iLO2, system BIOS
HP ProLiant DL580 G5                            iLO2, system BIOS

CP3000BL
HP ProLiant BL2x220c G5                         iLO2, system BIOS, Onboard Administrator (OA)
HP ProLiant BL260c G5                           iLO2, system BIOS, OA
HP ProLiant BL460c                              iLO2, system BIOS, OA
HP ProLiant BL480c                              iLO2, system BIOS, OA
HP ProLiant BL680c G5                           iLO2, system BIOS, OA

CP4000
HP ProLiant DL145                               LO-100i, system BIOS
HP ProLiant DL145 G2                            LO-100i, system BIOS
HP ProLiant DL145 G3                            LO-100i, system BIOS
HP ProLiant DL165 G5                            LO-100i, system BIOS
HP ProLiant DL365                               iLO2, system BIOS
HP ProLiant DL365 G5                            iLO2, system BIOS
HP ProLiant DL385                               iLO2, system BIOS
HP ProLiant DL385 G2                            iLO2, system BIOS
HP ProLiant DL385 G5                            iLO2, system BIOS
HP ProLiant DL585                               iLO2, system BIOS
HP ProLiant DL585 G2                            iLO2, system BIOS
HP ProLiant DL585 G5                            iLO2, system BIOS
HP ProLiant DL785 G5                            iLO2, system BIOS

CP4000BL
HP ProLiant BL465c                              iLO2, system BIOS, OA
HP ProLiant BL465c G5                           iLO2, system BIOS, OA
HP ProLiant BL685c                              iLO2, system BIOS, OA
HP ProLiant BL685c G5                           iLO2, system BIOS, OA

CP6000
HP Integrity rx1620                             Management Processor (MP), BMC, Extensible Firmware Interface (EFI)
HP Integrity rx2600                             MP, BMC, EFI, system
HP Integrity rx2620                             MP, BMC, EFI, system
HP Integrity rx2660                             MP, BMC, EFI, system
HP Integrity rx4640                             MP, BMC, EFI, system
HP Integrity rx8620                             MP, BMC, EFI, system

CP6000BL
HP Integrity BL860c Server Blade (Full-height)  MP, OA

Switches
ProCurve 2824 switch                            Firmware version
ProCurve 2848 switch                            Firmware version
ProCurve 2650 switch                            Firmware version
ProCurve 2610 switch                            Firmware version
ProCurve 2626 switch                            Firmware version

Interconnect
Myrinet                                         Firmware version
Myrinet interface card                          Interface card version
QsNetII                                         Firmware version
InfiniBand                                      Firmware version
4.2 Ethernet Port Connections on the Head Node
Table 4-2 lists the Ethernet port connections on the head node based on the type of interconnect in use. Use this information to determine the appropriate connections for the external network connection on the head node. This information does not apply to hardware configurations with HP server c-Class blades and enclosures.
Table 4-2 Ethernet Ports on the Head Node

Gigabit Ethernet Interconnect:
• Physical onboard Port #1 is always the connection to the administration network.
• Physical onboard Port #2 is the connection to the interconnect.
• Add-on NIC card #1 is available as an external connection.

All Other Interconnect Types:
• Physical onboard Port #1 is always the connection to the administration network.
• Physical onboard Port #2 is available for an external connection if needed (except if the port is 10/100, then it is unused).
• Add-on NIC card #1 is available for an external connection if Port #2 is 10/100.
4.3 General Hardware Preparations for All Cluster Platforms
Make the following hardware preparations on all cluster platform types if you have not already
done so:
1. The connection of nodes to ProCurve switch ports is important for the automatic discovery process. Ensure that all nodes are connected as described in “Making Node and Switch Connections” (page 45).
2. When possible, ensure that switches are configured to obtain IP addresses using DHCP. For
more information on how to do this, see the documents that came with the ProCurve
hardware. ProCurve documents are also available at the following Web page:
IMPORTANT: Some HP Integrity hardware models must be configured with static addresses,
not DHCP. For HP XC systems with one or more of these hardware models, you must
configure all the nodes with static IP addresses rather than with DHCP. The automatic
discovery process requires that all nodes be configured with DHCP or with static IP addresses,
but not a combination of both methods.
3. Ensure that any nodes connected to a Lustre® file system server are on their own Gigabit
Ethernet switch.
4. Ensure that all hardware components are running the correct firmware version and that all
nodes in the system are at the same firmware version. See “Firmware Requirements and
Dependencies” (page 59) for more information.
5. Nagios is a component of the HP XC system that monitors sensor data and system event
logs. Ensure that the console port of the head node is connected to the external network so
that it is accessible to Nagios during system operation. For more information on Nagios, see
the HP XC System Software Administration Guide.
6. Review the documentation that came with the hardware and have it available, if needed.
Depending upon the type of cluster platform, proceed to one of the following sections to prepare
individual nodes:
• “Preparing the Hardware for CP3000 (Intel Xeon with EM64T) Systems” (page 63)
• the section that describes preparing the hardware for CP4000 systems
• the section that describes preparing the hardware for CP6000 systems
4.4 Setting the Onboard Administrator Password
If the hardware configuration contains server blades and enclosures, you must define and set
the user name and password for the Onboard Administrator on every enclosure in the hardware
configuration.
IMPORTANT: You cannot set the Onboard Administrator password until the head node is
installed and the switches are discovered. For more information on installing the head node and
discovering switches, see the HP XC System Software Installation Guide.
The Onboard Administrator user name and password must match the user name and password you plan to use for the iLO2 console management devices. The default user name is Administrator, and HP recommends that you delete the predefined Administrator user for security purposes.
If you are using the default user name Administrator, set the password to be the same as the iLO2 password. If you create a new user name and password for the iLO2 devices, you must make the same settings on all Onboard Administrators.
Follow this procedure to configure a common password for each active Onboard Administrator:
1. Use a network cable to connect your PC or laptop to the administration network ProCurve switch.
2. Make sure the laptop or PC is set for a DHCP network.
3. Gather the following information:
a. Look at the Insight Display panel on each enclosure, and record the IP address of the
Onboard Administrator.
b. Look at the tag affixed to each enclosure, and record the default Onboard Administrator
password shown on the tag.
4. On your PC or laptop, use the information gathered in the previous step to browse to the
Onboard Administrator for every enclosure, and set a common user name and password
for each one. This password must match the administrator password you will later set on
the ProCurve switches. Do not use any special characters as part of the password.
After you set the Onboard Administrator password, prepare the nodes as described in the
appropriate section for all the server blade nodes in the enclosure:
• the section that describes CP3000BL server blades
• the section that describes CP4000BL server blades
• the section that describes CP6000BL server blades
4.5 Preparing the Hardware for CP3000 (Intel Xeon with EM64T) Systems
Follow the procedures in this section to prepare each node before installing and configuring the
HP XC System Software. Proceed to the following sections, depending on the hardware model:
•
•
•
•
•
•
•
•
•
4.5.1 Preparing HP ProLiant DL140 G2 and G3 Nodes
Use the BIOS Setup Utility to configure the appropriate settings for an HP XC system on HP
ProLiant DL140 G2 and DL140 G3 servers.
For these hardware models you cannot set or modify the default console port password through
the BIOS Setup Utility, as you can for other hardware models. The HP XC System Software
Installation Guide describes how to modify the console port password. You are instructed to
perform the task just after the discover command discovers the IP addresses of the console
ports.
Figure 4-1 shows the rear view of the HP ProLiant DL140 G2 server and the appropriate port
assignments for an HP XC system.
Figure 4-1 HP ProLiant DL140 G2 and DL140 G3 Server Rear View
The callouts in the figure enumerate the following:
1. This port is used for the connection to the Administration Switch (branch or root). On the
back of the node, this port is marked with the number 1 (NIC1).
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the back of the node, this
port is marked with the number 2 (NIC2).
3. This port is used for the connection to the Console Switch. On the back of the node, this port
is marked with LO100i.
Setup Procedure
Perform the following procedure for each HP ProLiant DL140 G2 and DL140 G3 node in the
hardware configuration. Change only the values described in this procedure; do not change any
other factory-set values unless you are instructed to do so. Follow all steps in the sequence shown:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F10 key when prompted to access the BIOS Setup Utility. The Lights-Out 100i
(LO-100i) console management device is configured through the BIOS Setup Utility.
The BIOS Setup Utility displays the following information about the node:
BIOS ROM ID:
BIOS Version:
BIOS Build Date:
Record this information for future reference.
3. For each node, make the following BIOS settings from the Main window. The settings differ
depending upon the generation of hardware model:
• Table 4-3 lists the BIOS settings for HP ProLiant DL140 G2 nodes.
• Table 4-4 lists the BIOS settings for HP ProLiant DL140 G3 nodes.
Table 4-3 BIOS Settings for HP ProLiant DL140 G2 Nodes

Menu Name   Submenu Name                     Option Name                    Set to This Value
Main        Boot Features                    Numlock                        Disabled
Advanced    PCI Device Configuration/        Device                         Enabled
            Ethernet on Board                Option ROM Scan                Enabled
            (for Ethernet 1,2)               Latency Timer                  40h
            Processor Options                Hyperthreading                 Disabled
            I/O Device Configuration         Serial Port                    Enabled
                                             BMC COM Port                   Disabled
                                             SIO COM Port                   Auto Detect
                                             Mouse controller               Enabled
            Console Redirection              Console Redirection            Enabled
                                             EMS Console                    Enabled
                                             Baud Rate                      115.2K
                                             Flow Control                   None
                                             Redirection After BIOS Post    On
            IPMI/LAN Setting                 IP Address Assignment          DHCP
                                             BMC Telnet Service             Enabled
                                             BMC Ping Response              Enabled
                                             BMC HTTP Service               Enabled
Power                                        Wake On Modem Ring             Disabled
                                             Wake On LAN                    Disabled
Boot        Set the following boot order on all nodes except the head node:
            1. CD-ROM
            2. Removable Devices
            3. PXE MBA V7.7.2 Slot 0200
            4. Hard Drive
            5. ! PXE MBA V7.7.2 Slot 0300 (! means disabled)
            Set the following boot order on the head node:
            1. CD-ROM
            2. Removable Devices
            3. Hard Drive
            4. PXE MBA V7.7.2 Slot 0200
            5. PXE MBA V7.7.2 Slot 0300
Table 4-4 lists the BIOS settings for HP ProLiant DL140 G3 nodes.
Table 4-4 BIOS Settings for HP ProLiant DL140 G3 Nodes

Menu Name   Submenu Name                Option Name                  Set to This Value
Main        Boot Features               Numlock                      Disabled
Advanced                                8042 Emulation Support       Disabled
            I/O Device Configuration    BMC Serial Port              Enabled
            Console Redirection         Console Redirection          Enabled
                                        EMS Console                  Enabled
                                        Baud Rate                    115.2K
                                        Continue C.R. after POST     Enabled
            IPMI/LAN Settings           IP Address Assignment        DHCP
                                        BMC Telnet Service           Enabled
                                        BMC Ping Response            Enabled
                                        BMC HTTP Service             Enabled
                                        BMC HTTPS Service            Enabled
Boot        Set the following boot order on the head node:
            1. CD-ROM
            2. Removable Devices
            3. Hard Drive
            4. Embedded NIC1
            5. Embedded NIC2
            Set the following boot order on all nodes except the head node:
            1. CD-ROM
            2. Removable Devices
            3. Embedded NIC1
            4. Hard Drive
            5. Embedded NIC2
                                        Embedded NIC1 PXE            Enabled
                                        Embedded NIC2 PXE            Disabled
Power                                   Resume On Modem Ring         Off
                                        Wake On LAN                  Disabled
4. From the Main window, select Exit→Save Changes and Exit to exit the utility.
5. If the DL140 G3 node uses SATA disks, you must disable the parallel ATA option; otherwise,
the disk might not be recognized and imaged.
Use the following menus to disable this option:
BIOS Menu Name   Submenu Name               Option Name    Set to This Value
Advanced         Advanced Chipset Control   Parallel ATA   Disabled
6. Repeat this procedure for each HP ProLiant DL140 G2 and G3 node in the HP XC system.
4.5.2 Preparing HP ProLiant DL160 G5 Nodes
Use the BIOS Setup Utility to configure the appropriate settings for an HP XC system on HP
ProLiant DL160 G5 servers.
For this hardware model, you cannot set or modify the default console port password through
the BIOS Setup Utility, as you can for other hardware models. The HP XC System Software
Installation Guide describes how to modify the console port password. You are instructed to
perform the task just after the discover command discovers the IP addresses of the console
ports.
Figure 4-2 shows the rear view of the HP ProLiant DL160 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-2 HP ProLiant DL160 G5 Server Rear View
The callouts in the figure enumerate the following:
1. This port is used for the connection to the Console Switch. On the back of the node, this port
is marked with LO100i.
2. This port is used for the connection to the Administration Switch (branch or root). On the
back of the node, this port is marked with the number 1(NIC1).
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the back of the node, this
port is marked with the number 2(NIC2).
Setup Procedure
Perform the following procedure for each HP ProLiant DL160 G5 node in the hardware
configuration. Change only the values described in this procedure; do not change any other
factory-set values unless you are instructed to do so. Follow all steps in the sequence shown:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F10 key when prompted to access the BIOS Setup Utility. The Lights-Out 100i
(LO-100i) console management device is configured through the BIOS Setup Utility.
The BIOS Setup Utility displays the following information about the node:
BIOS ROM ID:
BIOS Version:
BIOS Build Date:
Record this information for future reference.
3. For each node, make the following BIOS settings from the Main window.
Table 4-5 BIOS Settings for HP ProLiant DL160 G5 Nodes

Main → Boot Features → Numlock: Disabled
Advanced → 8042 Emulation Support: Disabled
Advanced → Remote Access Configuration → Remote Access: Enabled
Advanced → Remote Access Configuration → EMS Support (SPCR): Enabled
Advanced → Remote Access Configuration → Serial Port Mode: 115200 8,n,1
Advanced → Remote Access Configuration → Redirection after BIOS POST: Always
Advanced → Remote Access Configuration → Terminal Type: VT100
Advanced → IPMI Configuration → LAN Configuration → Share NIC Mode: Disabled
Advanced → IPMI Configuration → LAN Configuration → DHCP IP Source: Enabled
Boot → Boot Device Priority → 1st Boot Device: Hard Drive (on the head node); Embedded NIC1 (on all other nodes)
Table 4-5 BIOS Settings for HP ProLiant DL160 G5 Nodes (continued)

Boot → Boot Device Priority → 2nd Boot Device: Embedded NIC1 (on the head node); Hard Drive (on all other nodes)
Boot → Embedded NIC1 PXE: Enabled
Boot → Embedded NIC2 PXE: Disabled
4. From the Main window, select Exit→Save Changes and Exit to exit the utility.
5. Repeat this procedure for each HP ProLiant DL160 G5 node in the HP XC system.
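The console redirection values in Table 4-5 (115200 baud, 8 data bits, no parity, 1 stop bit) must match the terminal program on the client side. As a minimal sketch, assuming a Linux client connected through a serial cable and the open source minicom program, one of several possible tools (see also “Establishing a Connection Through a Serial Port” (page 141)):

minicom -b 115200 -D /dev/ttyS0

In this hypothetical example, /dev/ttyS0 is the client's first serial device; minicom uses 8 data bits, no parity, and 1 stop bit (8N1) by default, which matches the Serial Port Mode value of 115200 8,n,1.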
4.5.3 Preparing HP ProLiant DL360 G4 Nodes
Use the following tools to configure the appropriate settings for HP ProLiant DL360 G4 (including
DL360 G4p) servers:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL360 G4 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-3 shows a rear view of the HP ProLiant DL360 G4 server and the appropriate port
assignments for an HP XC system.
Figure 4-3 HP ProLiant DL360 G4 Server Rear View
The callouts in the figure enumerate the following:
1. The iLO Ethernet is the port used as the connection to the Console Switch.
2. NIC1 is used for the connection to the Administration Switch (branch or root).
3. NIC2 is used for the external connection.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL360 G4
node in the HP XC system:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-6.
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Table 4-6 iLO Settings for HP ProLiant DL360 G4 Nodes

User → Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
Network → DNS/DHCP → DHCP Enable: On
Settings → CLI → Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
Perform the following procedure from the RBSU for each node in the hardware configuration:
1. Make the following settings from the Main menu.
Table 4-7 BIOS Settings for HP ProLiant DL360 G4 Nodes

Standard Boot Order IPL → Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. NIC1
3. Hard Disk
On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

Advanced Options → Processor Hyper-Threading: Disable
System Options → Embedded Serial Port: COM2
System Options → Virtual Serial Port: COM1
Press the Esc key to return to the main menu.

BIOS Serial Console & EMS → BIOS Serial Console Port: COM1
BIOS Serial Console & EMS → BIOS Serial Console Baud Rate: 115200
BIOS Serial Console & EMS → EMS Console: Disable
BIOS Serial Console & EMS → BIOS Interface Mode: Command-Line
Press the Esc key to return to the main menu.
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL360 G4 node in the HP XC system.
Configuring Smart Arrays
On the HP ProLiant DL360 G4 with smart array cards, you must add the disks to the smart array
before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.5.4 Preparing HP ProLiant DL360 G5 Nodes
Use the following tools to configure the appropriate settings for HP ProLiant DL360 G5 servers:
•
•
Integrated Lights Out (iLO) Setup Utility
ROM-Based Setup Utility (RBSU)
HP ProLiant DL360 G5 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-4 shows a rear view of the HP ProLiant DL360 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-4 HP ProLiant DL360 G5 Server Rear View
The callouts in the figure enumerate the following:
1. This port is used for the connection to the Console Switch.
2. This port, NIC1, is used for the connection to the Administration Switch (branch or root).
3. The second onboard NIC is used for the Gigabit Ethernet interconnect or for the connection
to the external network.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL360 G5
node in the HP XC system:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-8.
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Table 4-8 iLO Settings for HP ProLiant DL360 G5 Nodes

User → Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
Network → DNS/DHCP → DHCP Enable: On
Settings → CLI → Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
Perform the following procedure from the RBSU for each node in the hardware configuration:
1. Make the settings shown in Table 4-9 from the Main menu.
Table 4-9 BIOS Settings for HP ProLiant DL360 G5 Nodes

System Options → Embedded Serial Port: COM2; IRQ3; IO: 2F8h - 2FFh
System Options → Virtual Serial Port: COM1; IRQ4; IO: 3F8h - 3FFh

Standard Boot Order IPL → Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. Floppy Drive (A:)
3. USB DriveKey (C:)
4. PCI Embedded HP NC373i Multifunction Gigabit Adapter
5. Hard Drive C: (see Boot Controller Order)
On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

BIOS Serial Console & EMS → BIOS Serial Console Port: COM1; IRQ4; IO: 3F8h - 3FFh
BIOS Serial Console & EMS → BIOS Serial Console Baud Rate: 115200
BIOS Serial Console & EMS → EMS Console: Disabled
BIOS Serial Console & EMS → BIOS Interface Mode: Command-Line
Press the Esc key to return to the main menu.
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL360 G5 node in the HP XC system.
Configuring Smart Arrays
On the HP ProLiant DL360 G5 nodes with smart array cards, you must add the disks to the smart
array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.5.5 Preparing HP ProLiant DL380 G4 and G5 Nodes
Use the following tools to configure the appropriate settings on HP ProLiant DL380 G4 and G5
servers:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL380 G4 and G5 servers use the iLO utility; thus, they need certain settings that
you cannot make until the iLO has an IP address. The HP XC System Software Installation Guide
provides instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-5 shows a rear view of the HP ProLiant DL380 G4 server and the appropriate port
assignments for an HP XC system.
Figure 4-5 HP ProLiant DL380 G4 Server Rear View
The callouts in the figure enumerate the following:
1. The iLO Ethernet port is used for the connection to the Console Switch.
2. NIC2 is used for the connection to the external network.
3. NIC1 is used for the connection to the Administration Switch (branch or root).
Figure 4-6 shows a rear view of the HP ProLiant DL380 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-6 HP ProLiant DL380 G5 Server Rear View
The callouts in the figure enumerate the following:
1. This port is used for the connection to the external network.
2. This port is used for the connection to the Administration Switch (branch or root).
3. The iLO Ethernet port is used for the connection to the Console Switch.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL380 node
in the HP XC system:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-10.

Table 4-10 iLO Settings for HP ProLiant DL380 G4 and G5 Nodes

Administration → User → New: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
Network → DNS/DHCP → DHCP Enable: On
Settings → CLI → Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure from the RBSU for each HP ProLiant DL380 node in the HP
XC system:
1. Make the following settings from the Main menu. The BIOS settings differ depending upon the hardware model generation:
• Table 4-11 lists the BIOS settings for HP ProLiant DL380 G4 nodes.
• Table 4-12 lists the BIOS settings for HP ProLiant DL380 G5 nodes.
Table 4-11 BIOS Settings for HP ProLiant DL380 G4 Nodes

Standard Boot Order IPL → Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. NIC1
3. Hard Disk
On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

Advanced Options → Processor Hyper-Threading: Disable
System Options → Embedded Serial Port: COM2
System Options → Virtual Serial Port: COM1
Press the Esc key to return to the main menu.

BIOS Serial Console & EMS → BIOS Serial Console Port: COM1
BIOS Serial Console & EMS → BIOS Serial Console Baud Rate: 115200
BIOS Serial Console & EMS → EMS Console: Disable
BIOS Serial Console & EMS → BIOS Interface Mode: Command-Line
Press the Esc key to return to the main menu.
Table 4-12 lists the BIOS settings for HP ProLiant DL380 G5 nodes.
Table 4-12 BIOS Settings for HP ProLiant DL380 G5 Nodes

System Options → Virtual Serial Port: COM1; IRQ4; IO: 3F8h - 3FFh
System Options → Embedded Serial Port: COM2; IRQ3; IO: 2F8h - 2FFh
Press the Esc key to return to the main menu.

Standard Boot Order IPL → Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. Floppy Drive (A:)
3. USB DriveKey (C:)
4. PCI Embedded HP NC373i Multifunction Gigabit Adapter
5. Hard Disk C: (see Boot Controller Order)
On the head node, set the boot order so that the CD-ROM is listed before the hard disk.
Table 4-12 BIOS Settings for HP ProLiant DL380 G5 Nodes (continued)

Advanced Options → Processor Hyper-Threading: Disable
BIOS Serial Console & EMS → BIOS Serial Console Port: COM1; IRQ4; IO: 3F8h - 3FFh
BIOS Serial Console & EMS → BIOS Serial Console Baud Rate: 115200
BIOS Serial Console & EMS → EMS Console: Disabled
BIOS Serial Console & EMS → BIOS Interface Mode: Command-Line
Press the Esc key to return to the main menu.
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL380 G4 and G5 node in the hardware
configuration.
Configuring Smart Arrays
On such hardware models as the HP ProLiant DL380 with smart array cards, you must add the
disks to the smart array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.5.6 Preparing HP ProLiant DL580 G4 Nodes
Use the following tools to configure the appropriate settings on HP ProLiant DL580 G4 servers:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL580 G4 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-7 shows a rear view of the HP ProLiant DL580 G4 server and the appropriate port
assignments for an HP XC system.
Figure 4-7 HP ProLiant DL580 G4 Server Rear View
The callouts in the figure enumerate the following:
1. NIC1 is used for the connection to the Administration Switch (branch or root).
2. NIC2 is used for the connection to the external network.
3. The iLO Ethernet port is used for the connection to the Console Switch.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL580 G4
node in the HP XC system:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-13.

Table 4-13 iLO Settings for HP ProLiant DL580 G4 Nodes

Administration → User → New: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
Network → DNS/DHCP → DHCP Enable: On
Settings → CLI → Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure from the RBSU for each HP ProLiant DL580 G4 node in the HP
XC system:
1. Make the following settings from the Main menu. The BIOS settings for HP ProLiant DL580 G4 nodes are listed in Table 4-14.
Table 4-14 BIOS Settings for HP ProLiant DL580 G4 Nodes

Standard Boot Order IPL → Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. NIC1
3. Hard Disk
On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

Advanced Options → Processor Hyper-Threading: Disable
System Options → Embedded Serial Port: COM2
System Options → Virtual Serial Port: COM1
Press the Esc key to return to the main menu.

BIOS Serial Console & EMS → BIOS Serial Console Port: COM1
BIOS Serial Console & EMS → BIOS Serial Console Baud Rate: 115200
BIOS Serial Console & EMS → EMS Console: Disable
BIOS Serial Console & EMS → BIOS Interface Mode: Command-Line
Press the Esc key to return to the main menu.
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL580 G4 node in the hardware configuration.
Configuring Smart Arrays
On such hardware models as the HP ProLiant DL580 G4 with smart array cards, you must add
the disks to the smart array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.5.7 Preparing HP ProLiant DL580 G5 Nodes
Use the following tools to configure the appropriate settings on HP ProLiant DL580 G5 servers:
•
•
Integrated Lights Out (iLO) Setup Utility
ROM-Based Setup Utility (RBSU)
HP ProLiant DL580 G5 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-8 shows a rear view of the HP ProLiant DL580 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-8 HP ProLiant DL580 G5 Server Rear View
The callouts in the figure enumerate the following:
1. The iLO Ethernet port is used for the connection to the Console Switch.
2. NIC1 is used for the connection to the Administration Switch (branch or root).
3. NIC2 is used for the connection to the external network.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL580 G5
node in the HP XC system:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-15.

Table 4-15 iLO Settings for HP ProLiant DL580 G5 Nodes

Administration → User → New: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
Network → DNS/DHCP → DHCP Enable: On
Settings → CLI → Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure from the RBSU for each HP ProLiant DL580 G5 node in the HP
XC system:
1. Make the following settings from the Main menu. The BIOS settings for HP ProLiant DL580 G5 nodes are listed in Table 4-16.
Table 4-16 BIOS Settings for HP ProLiant DL580 G5 Nodes

Standard Boot Order IPL → Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. NIC1
3. Hard Disk
On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

System Options → Embedded Serial Port: COM2
System Options → Virtual Serial Port: COM1
Press the Esc key to return to the main menu.

BIOS Serial Console & EMS → BIOS Serial Console Port: COM1
BIOS Serial Console & EMS → BIOS Serial Console Baud Rate: 115200
BIOS Serial Console & EMS → EMS Console: Disable
BIOS Serial Console & EMS → BIOS Interface Mode: Command-Line
Press the Esc key to return to the main menu.
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL580 G5 node in the hardware configuration.
Configuring Smart Arrays
On such hardware models as the HP ProLiant DL580 G5 with smart array cards, you must add
the disks to the smart array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.5.8 Preparing HP xw8200 and xw8400 Workstations
You can integrate HP xw8200 and xw8400 workstations into an HP XC system as a head node,
service node, or compute node.
Follow the procedures in this section to prepare each workstation before installing and configuring
the HP XC System Software.
Figure 4-9 shows a rear view of an HP xw8200 and xw8400 workstation and the appropriate port
connections for an HP XC system.
Figure 4-9 HP xw8200 and xw8400 Workstation Rear View
The callout in the figure enumerates the following:
1. This port is used for the connection to the administration network.
Setup Procedure
Use the Setup Utility to configure the appropriate settings for an HP XC system.
Perform the following procedure for each workstation in the hardware configuration. Change
only the values that are described in this procedure; do not change any other factory-set values
unless you are instructed to do so:
1. Establish a connection to the console by connecting a monitor and keyboard to the node.
2. Turn on power to the workstation.
3. When the node is powering on, press the F10 key to access the Setup Utility.
4. When prompted, press any key to continue.
5. Select English as the language.
6. Make the following BIOS settings for each workstation in the hardware configuration; the BIOS settings differ depending upon the workstation model:
• Table 4-17 lists the BIOS settings for HP xw8200 workstations.
• Table 4-18 lists the BIOS settings for HP xw8400 workstations.
Table 4-17 BIOS Settings for xw8200 Workstations

Storage → Boot Order: Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. Network Controller
3. Hard Disk
On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

Advanced → Processors → Hyper-Threading: Disable
Table 4-18 lists the BIOS settings for HP xw8400 workstations.
Table 4-18 BIOS Settings for xw8400 Workstations

Storage → Storage Options → SATA Emulation: Separate IDE Controller
After you make this setting, make sure the Primary SATA Controller and Secondary SATA Controller settings are set to Enabled.

Storage → Boot Order: Set the following boot order on all nodes except the head node:
1. Optical Drive
2. USB device
3. Broadcom Ethernet controller
4. Hard Drive
5. Intel Ethernet controller
On the head node, set the boot order so that the Optical Drive is listed before the hard disk.
7. Select File→Save Changes & Exit to exit the Setup Utility.
8. Repeat this procedure for each workstation in the hardware configuration.
9. Turn off power to all nodes except the head node.
10. Follow the software installation instructions in the HP XC System Software Installation Guide
to install the HP XC System Software.
4.5.9 Preparing HP xw8600 Workstations
You can integrate HP xw8600 workstations into an HP XC system as a head node, service node,
or compute node.
Follow the procedures in this section to prepare each workstation before installing and configuring
the HP XC System Software.
Figure 4-10 shows a rear view of an HP xw8600 workstation and the appropriate port connections
for an HP XC system.
Figure 4-10 HP xw8600 Workstation Rear View
The callouts in the figure enumerate the following:
1. This port is used for the connection to the administration network.
2. This port is used for connecting the workstation to an external network.
Setup Procedure
Use the Setup Utility to configure the appropriate settings for an HP XC system.
Perform the following procedure for each workstation in the hardware configuration. Change
only the values that are described in this procedure; do not change any other factory-set values
unless you are instructed to do so:
1. Establish a connection to the console by connecting a monitor and keyboard to the node.
2. Turn on power to the workstation.
3. When the node is powering on, press the F10 key to access the Setup Utility.
4. When prompted, press any key to continue.
5. Select English as the language.
6. Make the following BIOS settings for each workstation in the hardware configuration, as shown in Table 4-19.

Table 4-19 BIOS Settings for xw8600 Workstations

Storage → Storage Options → SATA Emulation: Separate IDE Controller
After you make this setting, make sure the Primary SATA Controller and Secondary SATA Controller settings are set to Enabled.

Storage → Boot Order: Set the following boot order on all nodes except the head node:
1. Optical Drive
2. USB device
3. Broadcom Ethernet controller
4. Hard Drive
5. Broadcom Ethernet controller
On the head node, set the boot order so that the Optical Drive is listed before the hard disk.
7. Select File→Save Changes & Exit to exit the Setup Utility.
8. Repeat this procedure for each workstation in the hardware configuration.
9. Turn off power to all nodes except the head node.
10. Follow the software installation instructions in the HP XC System Software Installation Guide
to install the HP XC System Software.
4.6 Preparing the Hardware for CP3000BL Systems
Perform the following tasks on each server blade in the hardware configuration after the head
node is installed and the switches are discovered:
• Set the boot order
• Create an iLO2 user name and password
• Set the power regulator
• Configure smart array devices
Use the Onboard Administrator, the iLO2 web interface, and virtual media to make the
appropriate settings on HP ProLiant Server Blades.
This procedure assumes that you have already completed the task described in “Administrator Password” (page 62), in which you used a browser to log in to the Onboard Administrator for the enclosure.
Setup Procedure
Use the following procedure to prepare the CP3000BL server blades:
1. In the left frame of the HP Onboard Administrator browser window, click the plus sign (+)
next to Device Bays to display the list of nodes contained in the enclosure.
2. Click the link to the first hardware model in the list. Wait a few seconds until the frame to
the right is populated with node-specific information.
3. Click the Boot Options tab.
a. Select a boot device, and use the up and down arrows on the screen to position the device in the boot order shown in Table 4-20.
Table 4-20 Boot Order for HP ProLiant Server Blades

Set the following boot order on the head node:
1. USB
2. Floppy
3. CD
4. Hard Disk
5. PXE NIC1

Set the following boot order on all nodes except the head node:
1. USB
2. Floppy
3. CD
4. PXE NIC1
5. Hard Disk
b. Click the Apply button.
4. In the left frame, do the following to create a new iLO2 user name and password on this
node:
a. Under the hardware model, click iLO.
b. In the body of the main window, click the Web Administration link to open the
Integrated Lights-Out 2 utility in a new window. You might have to turn off popup
blocking for this window to open.
c. In the new window, click the Administration tab.
d. In the left frame, click the User Administration link.
e. Click the New button, and create a new iLO2 user name and password, which must
match the user name and password you set on the Onboard Administrator. Do not use
any special characters as part of the password.
You use this user name and password whenever you need to access the console port
with the telnet cp-nodename command.
f. The Onboard Administrator automatically creates user accounts for itself (prefixed with
the letters OA) to provide single sign-on capabilities. Do not remove these accounts.
5. Enable telnet access:
a. In the left frame, click Access.
b. Click the control to enable Telnet Access.
c. Click the Apply button to save the settings.
6. Click the Virtual Devices tab and make the following settings:
a. For every node except the head node, select No to Automatically Power On Server because you do not want to automatically turn on power to the node.
b. Click the Submit button.
c. In the left frame, click on the Power Regulator link.
d. Select Enable HP Static High Performance Mode.
e. Click the Apply button to save the settings.
7. Configure disks into the smart array from the remote graphics console.
All server blades have smart array cards; you must add the disk or disks to the smart array before attempting to image the node.
To set up the smart array device, click the Remote Console tab on the virtual console page
of the iLO2 Web Administration Utility, and then do one of the following depending on the
browser type.
Internet Explorer
If you are using Internet Explorer as your browser, do the following:
a. Click the Integrated Remote Console link to open a remote console window which
provides access to the graphics console virtual media and power functions.
b. In the remote console window, click the Power button.
c. Click the Momentary Press button.
d. Wait a few seconds for the power up phase to begin. Click the MB1 mouse button in
the remote console window to put the pointer focus in this window so that your
keyboard strokes are recognized.
Mozilla Firefox
If you are using Mozilla Firefox as your browser, do the following:
a. Click the Remote Console link to open a virtual console window.
b. Go back to the iLO2 utility Web page and click the Virtual Devices tab.
c. Click the Momentary Press button.
d. Go back to the remote console window. Wait a few seconds for the power up phase to
begin. Click the MB1 mouse button in this window to put the pointer focus in the remote
console window so that your keyboard strokes are recognized in this window.
8. Watch the screen carefully during the power-on self-test phase, and press the F8 key when
you are prompted to configure the disks into the smart array. Select View Logical Drives
to determine if a logical drive exists. If a logical drive is not present, create one.
If you create a logical drive, exit the SmartArray utility and power off the node. Do not let
it try to boot up.
Specific smart array configuration instructions are outside the scope of this document. See
the documentation that came with your model of HP ProLiant server for more information.
9. Use the virtual power functions to turn off power to the server blade.
10. Close the iLO2 utility Web page.
11. Repeat this procedure from every active Onboard Administrator and make the same settings
for each server blade in each enclosure.
After preparing all the nodes in all the enclosures, return to the HP XC System Software Installation
Guide to discover all the nodes and enclosures in the HP XC system.
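After discovery, you reach a server blade console port with the telnet cp-nodename command mentioned in step 4e. For example, where cp-n2 is a placeholder console port name:

telnet cp-n2

When prompted, log in with the common iLO2 user name and password that you created for every node.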
4.7 Preparing the Hardware for CP4000 (AMD Opteron) Systems
Follow the procedures in this section to prepare each node before installing and configuring the
HP XC System Software. See the following sections depending on the hardware model:
•
•
•
•
•
•
•
•
•
•
•
4.7.1 Preparing HP ProLiant DL145 Nodes
On an HP ProLiant DL145 server, use the following tools to configure the appropriate settings
for an HP XC system:
• BIOS Setup Utility
• Intelligent Platform Management Interface (IPMI) Utility
Figure 4-11 shows the rear view of the HP ProLiant DL145 server and the appropriate port
assignments for an HP XC system.
Figure 4-11 HP ProLiant DL145 Server Rear View
The callouts in the figure enumerate the following:
1. The console Ethernet port is the connection to the Console Switch (branch or root).
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection.
3. NIC1 is the connection to the Administration Switch (branch or root). It corresponds to eth0 in Linux if there are no additional optional Ethernet ports installed in expansion slots. On the HP ProLiant DL145 server, NIC1 is the port on the right labeled with the number 1.
Setup Procedure
Perform the following procedure from the BIOS Setup Utility for each HP ProLiant DL145 node
in the hardware configuration:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F10 key when prompted to access the BIOS Setup Utility.
Table 4-21 BIOS Settings for HP ProLiant DL145 Nodes

Boot → Boot Settings Configuration (for NIC1) → Onboard NIC PXE Option ROM: Enabled (for all nodes except the head node); Disabled (for the head node)
Advanced → Management Processor Configuration → Set Serial Port Sharing: Shared
Advanced → BIOS Serial Console Configuration → Redirection After BIOS Post: Enabled

Boot → Boot Device Priority¹: Maintain the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. NIC1
3. Hard Disk
Set the head node to boot from CD-ROM first; the hard disk must be listed after CD-ROM.

¹ The NIC1 interface is named Broadcom MBA, and it is the second choice with this name from the Boot Screen Menu→Boot Device Priority.
For each HP ProLiant DL145 node, log in to the IPMI utility and invoke the Terminal mode:
1. Establish a connection to the server by using one of the following methods:
• A serial port connection to the console port
• A telnet session to the IP address of the Management NIC
NOTE: For more information about how to establish these connections, see
“Establishing a Connection Through a Serial Port” (page 141) or the documentation that
came with the HP ProLiant server.
2. Press the Esc key and then press Shift+9 to display the IPMI setup utility.
3. Enter the administrator's user name at the login: prompt (the default is admin).
4. Enter the administrator's password at the password: prompt (the default is admin).
5. Use the C[hange Password] option to change the console port management device password.
The factory default password is admin; change it to the password of your choice. This
password must be the same on every node in the hardware configuration.
ProLiant> ChangePassword
Type the current password> admin
Type the new password (max 16 characters)> your_password
Retype the new password (max 16 characters)> your_password
New password confirmed.
6. Ensure that all machines are requesting IP addresses through the Dynamic Host Configuration Protocol (DHCP). Do the following to determine if DHCP is enabled:
a. At the ProLiant> prompt, enter the following:
ProLiant> net
b. At the INET> prompt, enter the following:
INET> state
iface...ipsrc.....IP addr........subnet.......gateway
1-et1 dhcp 0.0.0.0 255.0.0.0 0.0.0.0
current tick count 2433
ping delay time: 280 ms.
ping host:
0.0.0.0
Task wakeups:netmain: 93
nettick: 4814
telnetsrv: 401
c. If the value for ipsrc is nvmem, enter dhcp at the INET> prompt:
INET> dhcp
Configuring for the enabling of DHCP.
Note: Configuration change has been made, but changes will
not take effect until the processor has been rebooted.
Do you wish to reboot the processor now, may take 10 seconds
(y or n)?
d. Enter y to reboot the processor.
7. If you did not change the DHCP setting, press Shift+Esc+Q, or enter quit at the ProLiant> prompt to exit the Management Processor CLI and invoke the Console mode.
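After the management processor reboots, you can repeat the state command to confirm that ipsrc now reads dhcp and that the BMC has leased an address. The following output is illustrative only; the addresses depend on your administration network:

INET> state
iface...ipsrc.....IP addr........subnet.......gateway
1-et1 dhcp 172.20.0.15 255.255.0.0 0.0.0.0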
4.7.2 Preparing HP ProLiant DL145 G2 and DL145 G3 Nodes
Use the BIOS Setup utility on HP ProLiant DL145 G2 and DL145 G3 servers to configure the
appropriate settings for an HP XC system.
For these hardware models, you cannot set or modify the default console port password through
the BIOS Setup Utility the way you can for other hardware models. The HP XC System Software
Installation Guide documents the procedure to modify the console port password. You are
instructed to perform the task just after the discover command discovers the IP addresses of
the console ports.
Figure 4-12 shows a rear view of the HP ProLiant DL145 G2 server and the appropriate port
assignments for an HP XC system.
Figure 4-12 HP ProLiant DL145 G2 Server Rear View
The callouts in the figure enumerate the following:
1. This port is used for the connection to the Administration Switch (branch or root). On the rear of the node, this port is marked with the number 1 (NIC1).
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection. Otherwise, it is used for an external connection. On the rear of the node, this port is marked with the number 2 (NIC2).
3. The port labeled LO100i is used for the connection to the Console Switch.
Figure 4-13 shows a rear view of the HP ProLiant DL145 G3 server and the appropriate port
assignments for an HP XC system.
Figure 4-13 HP ProLiant DL145 G3 Server Rear View
The callouts in the figure enumerate the following:
1. This port is used for the connection to the Administration Switch (branch or root). On the
rear of the node, this port is marked as NIC1.
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the rear of the node, this
port is marked as NIC2.
3. The port labeled LO100i is used for the connection to the Console Switch.
Setup Procedure
Perform the following procedure for each HP ProLiant DL145 G2 and DL145 G3 node in the HP
XC system. Change only the values that are described in this procedure; do not change any
factory-set values unless you are instructed to do so.
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F10 key when prompted to access the BIOS Setup Utility. You configure the
Lights-Out 100i (LO-100i) console management device using this utility.
The BIOS Setup Utility displays the following information about the node:
BIOS ROM ID:
BIOS Version:
BIOS Build Date:
Record this information for future reference.
3. Make the following BIOS settings for each node depending on hardware model:
• Table 4-22 provides the BIOS settings for ProLiant DL145 G2 nodes.
• Table 4-23 provides the BIOS settings for ProLiant DL145 G3 nodes.
Table 4-22 BIOS Settings for HP ProLiant DL145 G2 Nodes

Main → Boot Options → Numlock: Off
Advanced → MCFG Table: Disabled
Advanced → NIC Option: Dedicated NIC
Table 4-22 BIOS Settings for HP ProLiant DL145 G2 Nodes (continued)

Advanced → Hammer Configuration → Disable Jitter bit: Enabled
Advanced → Hammer Configuration → Page Directory Cache: Disabled
Advanced → PCI Configuration → Ethernet Device On Board (for Ethernet 1 and 2): Enabled
Advanced → PCI Configuration → Option ROM Scan: Enabled
Advanced → PCI Configuration → Latency timer: 40h
Advanced → I/O Device Configuration → Serial Port: BMC COM Port
Advanced → I/O Device Configuration → SIO COM Port: Disabled
Advanced → I/O Device Configuration → PS/2 Mouse: Enabled
Advanced → Console Redirection → Console Redirection: Enabled
Advanced → Console Redirection → EMS Console: Enabled
Advanced → Console Redirection → Baud Rate: 115.2K
Advanced → Console Redirection → Flow Control: None
Advanced → Console Redirection → Redirection after BIOS POST: On
Advanced → IPMI/LAN Setting → IP Address Assignment: DHCP
Advanced → IPMI/LAN Setting → BMC Telnet Service: Enabled
Advanced → IPMI/LAN Setting → BMC Ping Response: Enabled
Advanced → IPMI/LAN Setting → BMC HTTP Service: Enabled
Advanced → IPMI → BIOS POST Watchdog: Disabled

Boot → Set the following boot order on all nodes except the head node:
1. CD-ROM
2. Removable Devices
3. PXE MBA V7.7.2 Slot 0300
4. Hard Drive
5. ! PXE MBA V7.7.2 Slot 0200 (! means disabled)

Boot → Set the following boot order on the head node:
1. CD-ROM
2. Removable Devices
3. Hard Drive
4. PXE MBA V7.7.2 Slot 0200
5. PXE MBA V7.7.2 Slot 0300

Power → Wake On Modem Ring: Disabled
Power → Wake On LAN: Disabled
Table 4-23 provides the BIOS settings for ProLiant DL145 G3 nodes.
Table 4-23 BIOS Settings for HP ProLiant DL145 G3 Nodes

Main → Boot Options → NumLock: Off
Advanced → I/O Device Configuration → Serial Port Mode: BMC
Advanced → I/O Device Configuration → Serial port A: Enabled
Advanced → I/O Device Configuration → Base I/O address: 3F8
Advanced → I/O Device Configuration → Interrupt: IRQ 4
Advanced → Memory Controller Options → DRAM Bank Interleave: AUTO
Advanced → Memory Controller Options → Node Interleave: Disabled
Advanced → Memory Controller Options → 32-Bit Memory Hole: Enabled
Advanced → Serial ATA → Embedded SATA: Enabled
Advanced → Serial ATA → SATA Mode: SATA
Advanced → Serial ATA → Enable/Disable Int13 support: Enabled
Advanced → Option ROM Scan: Enabled
Advanced → Enable Master: Enabled
Advanced → Latency Timer: 0040h
Advanced → Console Redirection → Com Port Address: On-board COM A
Advanced → Console Redirection → Baud Rate: 115.2K
Advanced → Console Redirection → Console Type: ANSI
Advanced → Console Redirection → Flow Control: None
Advanced → Console Redirection → Console connection: Direct
Advanced → Console Redirection → Continue C.R. after POST: On
Advanced → Console Redirection → # of video pages to support: 1
Advanced → IPMI/LAN Setting → IP Address Assignment: DHCP
Advanced → IPMI/LAN Setting → LAN Controller: NIC
Advanced → HPET Timer: Disabled
Advanced → 8042 Emulation Support: Disabled
Advanced → Factory Boot Mode: Disabled

Boot → Set the following boot order on all nodes except the head node:
1. Removable Devices
2. CD-ROM Drive
3. MBA v9.0.6 Slot 0820
4. Hard Drive
5. MBA v9.0.6 Slot 0821

Boot → Set the following boot order on the head node:
1. Removable Devices
2. CD-ROM Drive
3. Hard Drive
4. Select Exit→Saving Changes to exit the BIOS Setup Utility.
5. Repeat this procedure for each HP ProLiant DL145 G2 and DL145 G3 node in the hardware
configuration.
4.7.3 Preparing HP ProLiant DL165 G5 Nodes
Use the BIOS Setup utility on HP ProLiant DL165 G5 servers to configure the appropriate settings
for an HP XC system.
For this hardware model, you cannot set or modify the default console port password through
the BIOS Setup Utility the way you can for other hardware models. The HP XC System Software
Installation Guide documents the procedure to modify the console port password. You are
instructed to perform the task just after the discover command discovers the IP addresses of
the console ports.
Figure 4-14 shows a rear view of the HP ProLiant DL165 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-14 HP ProLiant DL165 G5 Server Rear View
The callouts in the figure enumerate the following:
1. This port is used for the connection to the Administration Switch (branch or root). On the
rear of the node, this port is marked as NIC1.
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the rear of the node, this
port is marked as NIC2.
3. The port labeled LO100i is used for the connection to the Console Switch.
Setup Procedure
Perform the following procedure for each HP ProLiant DL165 G5 node in the HP XC system.
Change only the values that are described in this procedure; do not change any factory-set values
unless you are instructed to do so.
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F10 key when prompted to access the BIOS Setup Utility. You configure the
Lights-Out 100i (LO-100i) console management device using this utility.
The BIOS Setup Utility displays the following information about the node:
BIOS ROM ID:
BIOS Version:
BIOS Build Date:
Record this information for future reference.
3. Make the BIOS settings shown in Table 4-24.

Table 4-24 BIOS Settings for HP ProLiant DL165 G5 Nodes

Main → Boot Settings Configuration → Bootup Num-Lock: Disabled
Advanced → I/O Device Configuration → Embedded Serial Port Base Address: 3F8
Advanced → I/O Device Configuration → Embedded Serial Port IRQ: IRQ 4
Advanced → S-ATA Configuration → S-ATA Mode: S-ATA
Advanced → S-ATA Configuration → INT13 support: Enabled
Advanced → Remote Access Configuration → Base Address, IRQ: [3F8h,4]
Advanced → Remote Access Configuration → Serial Port Mode: 115200 8,n,1
Advanced → Remote Access Configuration → Redirection of BIOS POST: Always
Advanced → Remote Access Configuration → Terminal Type: ANSI
Advanced → IPMI Configuration → LAN Configuration → Share NIC Mode: Disabled
Advanced → IPMI Configuration → LAN Configuration → DHCP IP Source: Enabled

Boot → Set the following boot order on all nodes except the head node:
1. Removable Devices
2. CD-ROM Drive
3. MBA v9.0.6 Slot 0820
4. Hard Drive
5. MBA v9.0.6 Slot 0821

Boot → Set the following boot order on the head node:
1. Removable Devices
2. CD-ROM Drive
3. Hard Drive
4. Select Exit→Saving Changes to exit the BIOS Setup Utility.
5. Repeat this procedure for each HP ProLiant DL165 G5 node in the hardware configuration.
4.7.4 Preparing HP ProLiant DL365 Nodes
On HP ProLiant DL365 servers, use the following tools to configure the appropriate settings for
an HP XC system:
•
•
Integrated Lights Out (iLO) Setup Utility
ROM-Based Setup Utility (RBSU)
HP ProLiant DL365 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-15 shows a rear view of the HP ProLiant DL365 server and the appropriate port
assignments for an HP XC system.
Figure 4-15 HP ProLiant DL365 Server Rear View
The callouts in the figure enumerate the following:
1. This port is the Ethernet connection to the Console Switch. On the back of the node, this
port is marked with the acronym iLO.
2. This port is the connection to the Administration Switch (branch or root). On the back of the
node, this port is marked with the number 1.
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the back of the node, this
port is marked with the number 2.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL365 node
in the hardware configuration:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-25.

Table 4-25 iLO Settings for HP ProLiant DL365 Nodes

User → Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
Network → DNS/DHCP → DHCP Enable: On
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure from the RBSU for each HP ProLiant DL365 node in the hardware
configuration:
1. Make the settings shown in Table 4-26. Use the navigation aids shown at the bottom of the screen to move through the menus and make selections.
Table 4-26 RBSU Settings for HP ProLiant DL365 Nodes

System Options → Embedded NIC Port PXE Support¹: On all nodes except the head node, set this value to Enable NIC1 PXE. On the head node only, set this value to Embedded NIC PXE Disabled.
System Options → Embedded NIC Port 1 PXE Support: Enabled (all nodes except the head node); Disabled (head node only)
System Options → Embedded Serial Port: Disabled
System Options → Virtual Serial Port: COM1; IRQ4; IO:3F8h-3FFh
System Options → Power Regulator for ProLiant: Disabled

Standard Boot Order (IPL) → Set the following boot order on all nodes except the head node:
1. CD-ROM
2. NIC1
3. Hard Disk
On the head node, set the boot order so that the CD-ROM is listed before the hard disk.
For example: IPL1: CD-ROM; IPL2: Floppy Drive (A:); IPL3: PCI Embedded HP NC7782 Gigabit Server Adapter Port 1; IPL4: Hard Drive (C:)

BIOS Serial Console and EMS → BIOS Serial Console Port: COM1; IRQ4; IO:3F8h-3FFh
BIOS Serial Console and EMS → BIOS Serial Console Baud Rate: 115200
BIOS Serial Console and EMS → EMS Console: Disabled
BIOS Serial Console and EMS → BIOS Interface Mode: Command Line

¹ A small blue dialog box near the bottom left side of the screen indicates the current setting. You can make only one setting per node.
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL365 node in the hardware configuration.
Configuring Smart Arrays
On such hardware models as the HP ProLiant DL365 with smart array cards, you must add the
disks to the smart array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.7.5 Preparing HP ProLiant DL365 G5 Nodes
On HP ProLiant DL365 G5 servers, use the following tools to configure the appropriate settings
for an HP XC system:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL365 G5 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-16 shows a rear view of the HP ProLiant DL365 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-16 HP ProLiant DL365 G5 Server Rear View
The callouts in the figure enumerate the following:
1. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the back of the node, this
port is marked with the number 2.
2. This port is the connection to the Administration Switch (branch or root). On the back of the
node, this port is marked with the number 1.
3. This port is the Ethernet connection to the Console Switch. On the back of the node, this
port is marked with the acronym iLO.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL365 G5
node in the hardware configuration:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-27.

Table 4-27 iLO Settings for HP ProLiant DL365 G5 Nodes

User → Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
Network → DNS/DHCP → DHCP Enable: On
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure from the RBSU for each HP ProLiant DL365 G5 node in the
hardware configuration:
1. Make the RBSU settings shown in Table 4-28. Use the navigation aids shown at the bottom of the screen to move through the menus and make selections.
Table 4-28 RBSU Settings for HP ProLiant DL365 G5 Nodes

System Options
• Embedded NIC Port PXE Support¹: On all nodes except the head node, set this value to Enable NIC1 PXE. On the head node only, set this value to Embedded NIC PXE Disabled.
• Embedded Serial Port: Disabled
• Virtual Serial Port: COM1; IRQ4; IO:3F8h-3FFh
• Embedded NIC Port 1 PXE Support: Enabled (all nodes except the head node)
• Embedded NIC Port 1 PXE Support: Disabled (head node only)
• Power Regulator for ProLiant: Disabled

Standard Boot Order (IPL)
• On all nodes except the head node, set the following boot order:
  1. CD-ROM
  2. NIC1
  3. Hard Disk
• On the head node, set the boot order so that the CD-ROM is listed before the hard disk:
  IPL1: CD-ROM
  IPL2: Floppy Drive (A:)
  IPL3: PCI Embedded HP NC7782 Gigabit Server Adapter Port 1
  IPL4: Hard Drive (C:)

BIOS Serial Console and EMS
• BIOS Serial Console Port: COM1; IRQ4; IO:3F8h-3FFh
• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command Line

¹ A small blue dialog box near the bottom left side of the screen indicates the current setting. You can make only one setting per node.
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL365 G5 node in the hardware configuration.
Configuring Smart Arrays
On hardware models such as the HP ProLiant DL365 that have smart array cards, you must add the disks to the smart array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.7.6 Preparing HP ProLiant DL385 and DL385 G2 Nodes
On HP ProLiant DL385 and DL385 G2 servers, use the following tools to configure the appropriate
settings for an HP XC system:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL385 and DL385 G2 servers use the iLO utility; thus, they need certain settings
that you cannot make until the iLO has an IP address. The HP XC System Software Installation
Guide provides instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-17 shows a rear view of the HP ProLiant DL385 server and the appropriate port
assignments for an HP XC system.
Figure 4-17 HP ProLiant DL385 Server Rear View
The callouts in the figure enumerate the following:
1. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the back of the node, this
port is marked with the number 2.
2. This port is the connection to the Administration Switch (branch or root). On the back of the
node, this port is marked with the number 1.
3. This port is the Ethernet connection to the Console Switch. On the back of the node, this
port is marked with the acronym iLO.
Figure 4-18 shows a rear view of the HP ProLiant DL385 G2 server and the appropriate port
assignments for an HP XC system.
Figure 4-18 HP ProLiant DL385 G2 Server Rear View
The callouts in the figure enumerate the following:
1. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the back of the node, this
port is marked with the number 2.
2. This port is the connection to the Administration Switch (branch or root). On the back of the
node, this port is marked with the number 1.
3. This port is the Ethernet connection to the Console Switch. On the back of the node, this
port is marked with the acronym iLO.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL385 and
DL385 G2 node in the hardware configuration:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the following iLO settings for each node depending on hardware model:
• Table 4-29 provides the iLO settings for ProLiant DL385 nodes.
• Table 4-30 provides the iLO settings for ProLiant DL385 G2 nodes.
Table 4-29 iLO Settings for HP ProLiant DL385 Nodes
• User→Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password; for security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
• Network→DNS/DHCP→DHCP Enable: On
Table 4-30 lists the iLO settings for ProLiant DL385 G2 nodes.
Table 4-30 iLO Settings for HP ProLiant DL385 G2 Nodes
• User→Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password; for security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
• Network→DNS/DHCP→DHCP Enable: On
• Settings→CLI→Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure from the RBSU for each HP ProLiant DL385 and DL385 G2 node in the hardware configuration:
1. Make the RBSU settings accordingly. The settings differ depending on the hardware model
generation:
• Table 4-31 provides the RBSU settings for the HP ProLiant DL385 nodes.
• Table 4-32 provides the RBSU settings for the HP ProLiant DL385 G2 nodes.
Use the navigation aids shown at the bottom of the screen to move through the menus and
make selections.
Table 4-31 RBSU Settings for HP ProLiant DL385 Nodes

System Options
• Embedded NIC Port PXE Support¹: On all nodes except the head node, set this value to Enable NIC1 PXE. On the head node only, set this value to Embedded NIC PXE Disabled.
• Embedded Serial Port: Disabled
• Virtual Serial Port: COM1; IRQ4; IO:3F8h-3FFh
• Embedded NIC Port 1 PXE Support: Enabled (all nodes except the head node)
• Embedded NIC Port 1 PXE Support: Disabled (head node only)
• Power Regulator for ProLiant: Disabled

Standard Boot Order (IPL)
• On all nodes except the head node, set the following boot order:
  1. CD-ROM
  2. NIC1
  3. Hard Disk
• On the head node, set the boot order so that the CD-ROM is listed before the hard disk:
  IPL1: CD-ROM
  IPL2: Floppy Drive (A:)
  IPL3: PCI Embedded HP NC7782 Gigabit Server Adapter Port 1
  IPL4: Hard Drive (C:)

BIOS Serial Console and EMS
• BIOS Serial Console Port: COM1; IRQ4; IO:3F8h-3FFh
• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command Line

Advanced Options
• Page Directory Cache (PDC): Disabled

¹ A small blue dialog box near the bottom left side of the screen indicates the current setting. You can make only one setting per node.
Table 4-32 lists the RBSU settings for the HP ProLiant DL385 G2 nodes.
Table 4-32 RBSU Settings for HP ProLiant DL385 G2 Nodes

System Options
• Embedded Serial Port: COM2
• Virtual Serial Port: COM1

Standard Boot Order (IPL)
• On all nodes except the head node, set the following boot order; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
  1. CD-ROM
  2. Floppy Drive (A:)
  3. USB DriveKey (C:)
  4. PCI Embedded HP NC373i Multifunction Gigabit Adapter
  5. Hard Disk (C:) (see Boot Controller Order)
• On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

BIOS Serial Console and EMS
• BIOS Serial Console Port: COM1
• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command Line
Press the Esc key to return to the main menu.
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL385 and DL385 G2 node in the hardware
configuration.
Configuring Smart Arrays
On hardware models such as the HP ProLiant DL385 and DL385 G2 that have smart array cards, you must add the disks to the smart array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.7.7 Preparing HP ProLiant DL385 G5 Nodes
On HP ProLiant DL385 G5 servers, use the following tools to configure the appropriate settings
for an HP XC system:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL385 G5 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-19 shows a rear view of the HP ProLiant DL385 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-19 HP ProLiant DL385 G5 Server Rear View
The callouts in the figure enumerate the following:
1. This port is the Ethernet connection to the Console Switch. On the back of the node, this
port is marked with the acronym iLO.
2. This port is the connection to the Administration Switch (branch or root). On the back of the
node, this port is marked with the number 1.
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection. On the back of the node, this
port is marked with the number 2.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL385 G5
node in the hardware configuration:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-33.
Table 4-33 iLO Settings for HP ProLiant DL385 G5 Nodes
• User→Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password; for security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
• Network→DNS/DHCP→DHCP Enable: On
• Settings→CLI→Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
4.7.8 Preparing HP ProLiant DL585 and DL585 G2 Nodes
On HP ProLiant DL585 and DL585 G2 servers, use the following tools to configure the appropriate
settings for an HP XC system:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL585 and DL585 G2 servers use the iLO utility; thus, they need certain settings
that you cannot make until the iLO has an IP address. The HP XC System Software Installation
Guide provides instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-20 shows a rear view of the HP ProLiant DL585 server and the appropriate port
assignments for an HP XC system.
Figure 4-20 HP ProLiant DL585 Server Rear View
The callouts in the figure enumerate the following:
1. The iLO Ethernet port is the connection to the Console Switch.
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection.
3. NIC1 is the connection to the Administration Switch (branch or root).
Figure 4-21 shows a rear view of the HP ProLiant DL585 G2 server and the appropriate port
assignments for an HP XC system.
Figure 4-21 HP ProLiant DL585 G2 Server Rear View
The callouts in the figure enumerate the following:
1. NIC1 is the connection to the Administration Switch (branch or root).
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port (labeled NIC2) is used for the interconnect connection. Otherwise, it is used for an external connection.
3. The iLO2 Ethernet port is the connection to the Console Switch.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL585 and
DL585 G2 node in the hardware configuration:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the following iLO settings for each node depending on hardware model:
• Table 4-34 provides the iLO settings for ProLiant DL585 nodes.
• Table 4-35 provides the iLO settings for ProLiant DL585 G2 nodes.
Table 4-34 iLO Settings for HP ProLiant DL585 Nodes
• User→Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password; for security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
• Network→DNS/DHCP→DHCP Enable: On
Table 4-35 lists the iLO settings for ProLiant DL585 G2 nodes.
Table 4-35 iLO Settings for HP ProLiant DL585 G2 Nodes
• User→Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password; for security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
• Network→DNS/DHCP→DHCP Enable: On
• Settings→CLI→Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure for each HP ProLiant DL585 and DL585 G2 node in the hardware
configuration:
1. Make the RBSU settings accordingly for each node. The settings differ depending on the
hardware model generation:
• Table 4-36 provides the RBSU settings for the HP ProLiant DL585 nodes.
• Table 4-37 provides the RBSU settings for the HP ProLiant DL585 G2 nodes.
Table 4-36 RBSU Settings for HP ProLiant DL585 Nodes

System Options
• Embedded Serial Port: Disabled
• Virtual Serial Port: COM1; IRQ4; IO:3F8h-3FFh
• Embedded NIC Port 1 PXE Support: Enabled (all nodes except the head node)
• Embedded NIC Port 1 PXE Support: Disabled (head node only)
• Power Regulator for ProLiant: Disabled

Standard Boot Order (IPL)
• On all nodes except the head node, set the following boot order; the CD-ROM must be listed before the hard drive:
  IPL1: CD-ROM
  IPL2: Floppy Drive (A:)
  IPL3: PCI Embedded HP NC7782 Gigabit Server Adapter Port 1
  IPL4: Hard Drive (C:)
• On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

BIOS Serial Console and EMS
• BIOS Serial Console Port: COM1; IRQ4; IO:3F8h-3FFh
• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command Line

Advanced Options
• Page Directory Cache (PDC): Disabled
Table 4-37 lists the RBSU settings for the DL585 G2 nodes.
Table 4-37 RBSU Settings for HP ProLiant DL585 G2 Nodes

System Options
• Embedded Serial Port: Disabled
• Virtual Serial Port: COM1
• Embedded NICs/NIC 1 boot Options: On all nodes except the head node, set this value to PXE boot
• Embedded NICs/NIC 2 boot Options: Disabled (head node only)
• Power Regulator for ProLiant: OS Control Mode

Standard Boot Order (IPL)
• Set the following boot order on all nodes except the head node; the CD-ROM must be listed before the hard drive:
  IPL:1 CD-ROM
  IPL:2 Floppy Drive (A:)
  IPL:3 USB Drive Key (C:)
  IPL:4 PCI Embedded HP NC373i Multifunction Gigabit Adapter
  IPL:5 Hard Drive C:
• Set the following boot order on the head node:
  IPL:1 CD-ROM
  IPL:2 Floppy Drive (A:)
  IPL:3 USB Drive Key (C:)
  IPL:4 Hard Drive C:

BIOS Serial Console and EMS
• BIOS Serial Console Port: COM1
• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command Line

Advanced
• Linux x86_64 HPET Option: Enabled
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL585 and DL585 G2 node in the hardware configuration.
Configuring Smart Arrays
On hardware models such as the HP ProLiant DL585 and DL585 G2 that have smart array cards, you must add the disks to the smart array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. See the documentation
that came with the HP ProLiant server for more information.
4.7.9 Preparing HP ProLiant DL585 G5 Nodes
On HP ProLiant DL585 G5 servers, use the following tools to configure the appropriate settings
for an HP XC system:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL585 G5 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-22 shows a rear view of the HP ProLiant DL585 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-22 HP ProLiant DL585 G5 Server Rear View
The callouts in the figure enumerate the following:
1. The iLO2 Ethernet port is the connection to the Console Switch.
2. NIC1 is the connection to the Administration Switch (branch or root).
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port (labeled NIC2) is used for
the interconnect connection. Otherwise, it is used for an external connection.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL585 G5
node in the hardware configuration:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-38.
Table 4-38 iLO Settings for HP ProLiant DL585 G5 Nodes
• User→Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password; for security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
• Network→DNS/DHCP→DHCP Enable: On
• Settings→CLI→Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure for each HP ProLiant DL585 G5 node in the hardware
configuration:
1. Make the RBSU settings shown in Table 4-39 for the HP ProLiant DL585 G5 nodes.
Table 4-39 RBSU Settings for HP ProLiant DL585 G5 Nodes

System Options
• Embedded Serial Port: Disabled
• Virtual Serial Port: COM1
• Embedded NICs/NIC 1 boot Options: On all nodes except the head node, set this value to PXE boot
• Embedded NICs/NIC 2 boot Options: Disabled (head node only)
• Power Regulator for ProLiant: OS Control Mode

Standard Boot Order (IPL)
• Set the following boot order on all nodes except the head node; the CD-ROM must be listed before the hard drive:
  IPL:1 CD-ROM
  IPL:2 Floppy Drive (A:)
  IPL:3 USB Drive Key (C:)
  IPL:4 PCI Embedded HP NC371i Multifunction Gigabit Adapter
  IPL:5 Hard Drive C:
• Set the following boot order on the head node:
  IPL:1 CD-ROM
  IPL:2 Floppy Drive (A:)
  IPL:3 USB Drive Key (C:)
  IPL:4 Hard Drive C:

BIOS Serial Console and EMS
• BIOS Serial Console Port: COM1
• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command Line

Advanced
• Linux x86_64 HPET Option: Enabled
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL585 G5 node in the hardware configuration.
Configuring Smart Arrays
On HP ProLiant DL585 G5 nodes with smart array cards, you must add the disks to the smart
array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. For more information,
see the documentation that came with the HP ProLiant server.
4.7.10 Preparing HP ProLiant DL785 G5 Nodes
On HP ProLiant DL785 G5 servers, use the following tools to configure the appropriate settings
for an HP XC system:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)
HP ProLiant DL785 G5 servers use the iLO utility; thus, they need certain settings that you cannot
make until the iLO has an IP address. The HP XC System Software Installation Guide provides
instructions for using a browser to connect to the iLO to enable telnet access.
Figure 4-23 shows a rear view of the HP ProLiant DL785 G5 server and the appropriate port
assignments for an HP XC system.
Figure 4-23 HP ProLiant DL785 G5 Server Rear View
The callouts in the figure enumerate the following:
1. The iLO2 Ethernet port is the connection to the Console Switch.
2. NIC1 is the connection to the Administration Switch (branch or root).
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port (labeled NIC2) is used for
the interconnect connection. Otherwise, it is used for an external connection.
Setup Procedure
Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL785 G5
node in the hardware configuration:
1. Use the instructions in the accompanying HP ProLiant hardware documentation to connect
a monitor, mouse, and keyboard to the node.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and
press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the iLO settings shown in Table 4-40.
Table 4-40 iLO Settings for HP ProLiant DL785 G5 Nodes
• User→Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password; for security purposes, HP recommends that you delete the Administrator user. You must use this user name and password to access the console port.
• Network→DNS/DHCP→DHCP Enable: On
• Settings→CLI→Serial CLI Speed (bits/seconds): 115200 (Press the F10 key to save the setting.)
4. Select File→Exit to exit the Integrated Lights Out Setup Utility and resume the power-on
self-test.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based
Setup Utility (RBSU).
Perform the following procedure for each HP ProLiant DL785 G5 node in the hardware
configuration:
1. Make the RBSU settings shown in Table 4-41 for the HP ProLiant DL785 G5 nodes.
Table 4-41 RBSU Settings for HP ProLiant DL785 G5 Nodes

System Options
• Embedded Serial Port: Disabled
• Virtual Serial Port: COM1
• Embedded NICs/NIC 1 boot Options: On all nodes except the head node, set this value to PXE boot
• Embedded NICs/NIC 2 boot Options: Disabled (head node only)
• Power Regulator for ProLiant: OS Control Mode

Standard Boot Order (IPL)
• Set the following boot order on all nodes except the head node; the CD-ROM must be listed before the hard drive:
  IPL:1 CD-ROM
  IPL:2 Floppy Drive (A:)
  IPL:3 USB Drive Key (C:)
  IPL:4 PCI Embedded HP NC373i Multifunction Gigabit Adapter
  IPL:5 Hard Drive C:
• Set the following boot order on the head node:
  IPL:1 CD-ROM
  IPL:2 Floppy Drive (A:)
  IPL:3 USB Drive Key (C:)
  IPL:4 Hard Drive C:

BIOS Serial Console and EMS
• BIOS Serial Console Port: COM1
• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command Line
2. Press the Esc key to exit the RBSU. Press the F10 key to confirm your choice and restart the
boot sequence.
3. Repeat this procedure for each HP ProLiant DL785 G5 node in the hardware configuration.
Configuring Smart Arrays
On HP ProLiant DL785 G5 nodes with smart array cards, you must add the disks to the smart
array before attempting to image the node.
To do so, watch the screen carefully during the power-on self-test phase of the node, and press
the F8 key when prompted to configure the disks into the smart array.
Specific instructions are outside the scope of the HP XC documentation. For more information,
see the documentation that came with the HP ProLiant server.
4.7.11 Preparing HP xw9300 and xw9400 Workstations
HP xw9300 and xw9400 workstations are typically used when the HP Scalable Visual Array
(SVA) software is installed and configured to interoperate on an HP XC system. Configuring an
xw9300 or xw9400 workstation as the HP XC head node is supported.
Figure 4-24 shows a rear view of the xw9300 workstation and the appropriate port connections
for an HP XC system.
Figure 4-24 xw9300 Workstation Rear View
The callout in the figure enumerates the following:
1. This port is used for the connection to the administration network.
Figure 4-25 shows a rear view of the xw9400 workstation and the appropriate port connections
for an HP XC system.
Figure 4-25 xw9400 Workstation Rear View
The callouts in the figure enumerate the following:
1. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect
connection. Otherwise, it is used for an external connection.
2. This port is used for the connection to the administration network.
Setup Procedure
Use the Setup Utility to configure the appropriate settings for an HP XC system.
Perform the following procedure for each workstation in the hardware configuration. Change
only the values described in this procedure; do not change any other factory-set values unless
you are instructed to do so:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node and establish a connection to the console.
2. Turn on power to the workstation.
3. When the node is powering up, press the F10 key to access the Setup Utility.
4. When prompted, press any key to continue.
5. Select English as the language.
6. Make the appropriate settings for the workstation depending on hardware model:
• Table 4-42 describes the settings for the xw9300 workstation.
• Table 4-43 describes the settings for the xw9400 workstation.
Table 4-42 Setup Utility Settings for xw9300 Workstations

Storage→Boot Order
• On all nodes except the head node, set the following boot order; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
  1. IDE CD-ROM Drive
  2. USB device
  3. Network Controller
  4. Hard Drive (C:)
• On the head node, set the boot order so that the CD-ROM is listed before the hard disk.

Advanced→Power/Sleep/Wake→After Power Loss: On
Table 4-43 describes the settings for the xw9400 workstation.
Table 4-43 Setup Utility Settings for xw9400 Workstations

Storage→Boot Order
• On all nodes except the head node, set the following boot order; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
  1. Optical Drive
  2. USB device
  3. nVidia Network Controller 1
  4. Hard Drive (C:)
  5. nVidia Network Controller 2
• Set the boot order on the head node as follows:
  1. Optical Drive
  2. USB device
  3. Hard Drive (C:)
  4. nVidia Network Controller 1
  5. nVidia Network Controller 2

Advanced→Power/Sleep/Wake→After Power Loss: On
7. Select File→Save Changes & Exit to exit the Setup Utility.
8. Repeat this procedure for each workstation in the hardware configuration.
9. Turn off power to all nodes except the head node.
10. Follow the software installation instructions in the HP XC System Software Installation Guide
to install the HP XC System Software.
4.8 Preparing the Hardware for CP4000BL Systems
Perform the following tasks on each server blade in the hardware configuration after the head
node is installed and the switches are discovered:
• Set the boot order
• Create an iLO2 user name and password
• Set the power regulator
• Configure smart array devices
Use the Onboard Administrator, the iLO2 web interface, and virtual media to make the
appropriate settings on HP ProLiant Server Blades.
This procedure assumes that you have already completed the task described in “Administrator Password” (page 62), in which you used a browser to log in to the Onboard Administrator for the enclosure.
Setup Procedure
Use the following procedure to prepare the CP4000BL server blades:
1. In the left frame of the HP Onboard Administrator browser window, click the plus sign (+)
next to Device Bays to display the list of nodes contained in the enclosure.
2. Click the link to the first hardware model in the list. Wait a few seconds until the frame to
the right is populated with node-specific information.
3. Click the Boot Options tab.
a. Select a boot device, and use the up and down arrows on the screen to position the boot devices in the order shown in Table 4-44.
Table 4-44 Boot Order for HP ProLiant Server Blades
Set the following boot order on the head node:
1. USB
2. Floppy
3. CD
4. Hard Disk
5. PXE NIC1

Set the following boot order on all nodes except the head node:
1. USB
2. Floppy
3. CD
4. PXE NIC 1
5. Hard Disk
b. Click the Apply button.
4. In the left frame, do the following to create a new iLO2 user name and password on this
node:
a. Under the hardware model, click iLO.
b. In the body of the main window, click the Web Administration link to open the
Integrated Lights-Out 2 utility in a new window. You might have to turn off popup
blocking for this window to open.
c. In the new window, click the Administration tab.
d. In the left frame, click the User Administration link.
e. Click the New button, and create a new iLO2 user name and password, which must
match the user name and password you set on the Onboard Administrator. Do not use
any special characters as part of the password.
You use this user name and password whenever you need to access the console port with the telnet cp-nodename command; see the example that follows this step.
f. The Onboard Administrator automatically creates user accounts for itself (prefixed with
the letters OA) to provide single sign-on capabilities. Do not remove these accounts.
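For example, a console-port connection to a hypothetical node named n8 looks like the following; the iLO2 prompts for the user name and password you created above:

  telnet cp-n8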
5. Enable telnet access:
a. In the left frame, click Access.
b. Click the control to enable Telnet Access.
c. Click the Apply button to save the settings.
6. Click the Virtual Devices tab and make the following settings:
a. For every node except the head node, select No for Automatically Power On Server because you do not want to turn on power to the node automatically.
b. Click the Submit button.
c. In the left frame, click on the Power Regulator link.
d. Select Enable OS Control Mode.
e. Click the Apply button to save the settings.
7. Configure disks into the smart array from the remote graphics console.
Because all server blades have smart array cards, you must add the disk or disks to the smart array before attempting to image the node.
To set up the smart array device, click the Remote Console tab on the virtual console page
of the iLO2 Web Administration Utility, and then do one of the following depending on the
browser type.
Internet Explorer
If you are using Internet Explorer as your browser, do the following:
a. Click the Integrated Remote Console link to open a remote console window which
provides access to the graphics console virtual media and power functions.
b. In the remote console window, click the Power button.
c. Click the Momentary Press button.
d. Wait a few seconds for the power up phase to begin. Click the MB1 mouse button in
the remote console window to put the pointer focus in this window so that your
keyboard strokes are recognized.
Mozilla Firefox
If you are using Mozilla Firefox as your browser, do the following:
a. Click the Remote Console link to open a virtual console window.
b. Go back to the iLO2 utility Web page and click the Virtual Devices tab.
c. Click the Momentary Press button.
d. Go back to the remote console window. Wait a few seconds for the power up phase to
begin. Click the MB1 mouse button in this window to put the pointer focus in the remote
console window so that your keyboard strokes are recognized in this window.
8. Watch the screen carefully during the power-on self-test phase, and press the F8 key when
you are prompted to configure the disks into the smart array. Select View Logical Drives to determine whether a logical drive exists. If a logical drive is not present, create one.
If you create a logical drive, exit the SmartArray utility and power off the node. Do not let
it try to boot up.
Specific smart array configuration instructions are outside the scope of this document. See
the documentation that came with your model of HP ProLiant server for more information.
9. Perform this step for HP ProLiant BL685c nodes; proceed to the next step for all other
hardware models.
On an HP ProLiant BL685c node, watch the screen carefully during the power-on self-test, and press the F9 key to access the ROM-Based Setup Utility (RBSU) to enable HPET as shown in Table 4-45.
Table 4-45 Additional BIOS Setting for HP ProLiant BL685c Nodes
• Advanced→Linux x86_64 HPET: Enabled
Press the F10 key to exit the RBSU. The server automatically restarts.
10. Use the virtual power functions to turn off power to the server blade.
11. Close the iLO2 utility Web page.
12. Repeat this procedure from every active Onboard Administrator and make the same settings
for each server blade in each enclosure.
After preparing all the nodes in the enclosures, return to the HP XC System Software Installation
Guide to discover all the nodes and enclosures in the HP XC system.
4.9 Preparing the Hardware for CP6000 (Intel Itanium) Systems
Follow the procedures in this section to prepare HP Integrity servers before installing and
configuring the HP XC System Software. The following topics, including setup information for
specific hardware models, are discussed in the subsections that follow.
About the EFI Boot Manager User Interface:
Two user interfaces for the EFI Boot Manager utility are available: the Enhanced and Legacy
interfaces. The setup instructions described here are based on the Enhanced interface. To change
the user interface on the system, use the Set User Interface menu option.
4.9.1 Setting Static IP Addresses on Integrity Servers
IMPORTANT: Integrity nodes are configured with static addresses, not DHCP. On an HP XC
system with one or more such nodes, you must configure the Management Processor (MP) on
all nodes for static IP addresses rather than DHCP.
You configure IP addresses from the Command Menu of the Management Processor (MP). The LC command configures the IP address, subnet mask, and default gateway address for the HP Integrity MP interfaces; do not configure an IP address for the slave MP.
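For example, entering the command at the MP Command Menu prompt starts an interactive dialog that walks through the address, subnet mask, and gateway values; the MP:CM> prompt shown here is typical of HP Integrity MPs, and you supply the values you planned for the node:

  MP:CM> LC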
You must plan the IP addresses ahead of time. The following example illustrates how to plan IP addresses. The information required for a 16-node cluster of rx8620 servers is as follows:
• The gateway address is 172.21.0.16 (default based on 16 nodes).
• The subnet mask address is 255.0.0.0.
Table 4-46 Setting Static IP Addresses for MP Power Management Devices

Node                                     IP Address      26xx ProCurve Port
First node after the head node is n15    172.21.0.15     n
Second node after the head node is n14   172.21.0.14     n-1
Third node after the head node is n13    172.21.0.13     n-2
(. . .)                                  (. . .)         (. . .)
First rx8620 master MP                   172.21.0.x      n-3
First rx8620 slave MP                    N/A
Second rx8620 master MP                  172.21.0.x-2
Second rx8620 slave MP                   N/A
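After the static addresses are set, a quick way to spot-check each MP from the head node is shown below; the address is taken from the planning table above:

  # Confirm that the MP answers on the administration network:
  ping -c 1 172.21.0.15
  # Confirm that the MP accepts console connections:
  telnet 172.21.0.15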
4.9.2 Preparing HP Integrity rx1620 and rx2600 Nodes
Figure 4-26 shows a rear view of the HP Integrity rx1620 server and the appropriate port
assignments for an HP XC system.
Figure 4-26 HP Integrity rx1620 Server Rear View
The callouts in the figure enumerate the following:
1. The port labeled LAN 10/100 is the MP connection to the ProCurve Console Switch.
2. The port labeled LAN Gb A connects to the Administration Switch (branch or root).
3. The port labeled LAN Gb B is used for an external connection.
Figure 4-27 shows a rear view of the HP Integrity rx2600 server and the appropriate port
assignments for an HP XC system.
The high-speed interconnect card, such as an InfiniBand or QsNetII card, must be inserted into the top PCI-X slot. The external connection is made on the Ethernet adapter card.
Figure 4-27 HP Integrity rx2600 Server Rear View
The callouts in the figure enumerate the following:
1. The top port labeled LAN 10/100 is the MP connection to the ProCurve Console Switch.
2. The port labeled LAN Gb connects to the Administration Switch (branch or root).
3. The bottom port labeled LAN 10/100 is unused.
Setup Procedure
Perform the following procedure on each HP Integrity server model rx1620 and rx2600:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. For each node in the system, ensure that the power cord is connected but that the processor is not turned on.
3. Follow this procedure to connect a personal computer to the Management Processor:
a. Connect a three-way DB9-25 cable to the MP DB-25 port on the back of the HP Integrity
rx2600 server.
b. Connect the CONSOLE connector to a null modem cable, and connect the null modem
cable to the PC COM1 port.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window. (A Linux alternative is shown in the example after this step.)
d. Press the Enter key to access the MP. If there is no response, press the MP reset pin on
the back of the MP and try again.
e. Log in to the MP using the default user name and password shown on the screen. The
MP Main Menu appears:
MP MAIN MENU:
CO: Console
VFP: Virtual Front Panel
CM: Command Menu
SMCLP: Server Management Command Line Protocol
CL: Console Log
SL: Show Event Logs
HE: Main Help Menu
X: Exit Connection
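If the PC runs Linux rather than Windows, a serial terminal emulator such as minicom serves the same purpose as HyperTerminal. A minimal sketch, assuming the null modem cable is attached to /dev/ttyS0 and the MP serial port is at a default speed of 9600 baud:

  minicom -D /dev/ttyS0 -b 9600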
4. Enter SL to show event logs. Then, enter C to clear all log files and Y to confirm.
5. Enter CM to display the Command Menu.
6. Perform the following steps to ensure that the IPMI over LAN option is set. This setting is required for Nagios monitoring. (A verification example follows these steps.)
a. Enter LC.
b. Verify the IPMI over LAN option is enabled.
c. Enable this option if it is disabled.
d. Return to the Command Menu.
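Once the MP has its static address, one way to verify IPMI over LAN from the head node is with the open source ipmitool utility; the address and credentials below are hypothetical placeholders:

  # Query the chassis power state through the MP LAN interface:
  ipmitool -I lan -H 172.21.0.15 -U mpuser -P mppass chassis status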
7. Enter UC and use the menu options to remove the default MP user name and password and
create your own unique user name and password. HP recommends setting your own user
name and password for security purposes.
The user name must have a minimum of 6 characters, and the password must have a
minimum of 8 characters. You must set the same user name and password on every node.
The user name and password are required to access the power management device and
console.
9. Enter PC (power cycle) and then enter on to turn on power to the node.
10. Press Ctrl-B to return to the MP Main Menu.
11. Enter CO to connect to the console.
12. Perform this step on all nodes except the head node. From the EFI Boot Manager menu,
which is displayed when the node is powering on, select the Boot Configuration menu.
Do the following from the Boot Configuration menu:
a. Select Add Boot Entry.
b. Select the network boot device, which is a Gigabit Ethernet (GigE) port:
• On HP Integrity rx1620 servers, select Load File [Core LAN Gb A].
• On HP Integrity rx2600 servers, select [Acpi(HWP0002,100)/Pci(2|0)/Mac(XXXXXXXXXXXX)].
c. Enter the string Netboot as the boot option description. This entry is required and must be set to the string Netboot (with a capital letter N).
d. Press the Enter key when prompted to enter a load option.
e. If prompted, save the entry to NVRAM.
f. Enter x to return to the previous menu.
For more information about how to work with these menus, see the documentation that
came with the HP Integrity server.
13. Perform this step on all nodes except the head node. From the Boot Configuration menu,
select the Edit OS Boot Order option.
Do the following from the Edit OS Boot Order option:
a. Use the navigation instructions shown on the screen to move the Netboot entry you just defined to the top of the boot order.
b. If prompted, save the setting to NVRAM.
c. Enter x to return to the previous menu.
14. Perform this step on all nodes, including the head node.
Select the Console Configuration option, and do the following:
a. Enable the Serial Acpi(HWP0002,PNP0A03,700)/Pci(1|1) Vt100+ 9600 option.
b. Press the Esc key or enter x as many times as necessary to return to the Boot
Configuration menu.
c. When prompted, save the entry to NVRAM.
15. Turn off power to the node:
a. Press Ctrl-B to exit console mode.
b. Enter CM to display the Command Menu.
c. Enter PC and enter off to turn off power to the node.
4.9.3 Preparing HP Integrity rx2620 Nodes
Figure 4-28 shows a rear view of the HP Integrity rx2620 server and the appropriate port
assignments for an HP XC system.
Figure 4-28 HP Integrity rx2620 Server Rear View
The callouts in the figure enumerate the following:
1. The port labeled LAN 10/100 is the MP connection to the ProCurve Console Switch.
2. The port labeled LAN Gb A connects to the Administration Switch (branch or root).
3. The port labeled LAN Gb B is used for an external connection.
Setup Procedure
Perform the following procedure on each HP Integrity rx2620 server:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. For each node in the system, ensure that the power cord is connected but that the processor
is not turned on.
3. Follow this procedure to connect a personal computer to the Management Processor:
a. Connect a three-way DB9-25 cable to the MP DB-25 port on the back of the HP Integrity
rx2620 server.
b. Connect the CONSOLE connector to a null modem cable, and connect the null modem
cable to the PC COM1 port.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP. If there is no response, press the MP reset pin on
the back of the MP and try again.
e. Log in to the MP using the default user name and password shown on the screen. The
MP Main Menu appears:
MP MAIN MENU:
CO: Console
VFP: Virtual Front Panel
CM: Command Menu
SMCLP: Server Management Command Line Protocol
CL: Console Log
SL: Show Event Logs
HE: Main Help Menu
X: Exit Connection
4. Enter SL to show event logs. Then, enter C to clear all log files and Y to confirm.
5. Enter CM to display the Command Menu.
6. Perform the following steps to ensure that the IPMI over LAN option is set. This setting is
required for Nagios monitoring.
a. Enter LC.
b. Verify the IPMI over LAN option is enabled.
c. Enable this option if it is disabled.
d. Return to the Command Menu.
7. Enter UC and use the menu options to remove the default MP user name and password and
create your own unique user name and password. HP recommends setting your own user
name and password for security purposes.
The user name must have a minimum of 6 characters, and the password must have a
minimum of 8 characters. You must set the same user name and password on every node.
The user name and password are required to access the power management device and
console.
9. Enter PC (power cycle) and then enter on to turn on power to the node.
10. Press Ctrl-B to return to the MP Main Menu.
11. Enter CO to connect to the console.
12. Perform this step on all nodes except the head node. From the Boot Menu screen, which is
displayed during the power on of the node, select the Boot Configuration Menu.
Do the following from the Boot Configuration Menu:
a. Select Add Boot Entry.
b. Select Load File [Core LAN Gb A] as the network boot choice, which is a Gigabit
Ethernet (GigE) port.
c. Enter the string Netboot as the boot option description. This entry is required and must be set to the string Netboot (with a capital letter N).
d. Press the Enter key for no db-profile options.
e. Press the Enter key for no boot options.
f. If prompted, save the entry to NVRAM.
For more information about how to work with these menus, see the documentation that
came with the HP Integrity server.
13. Perform this step on all nodes except the head node. From the Boot Configuration menu,
select the Edit OS Boot Order option.
Do the following:
a. Use the navigation instructions on the screen to move the Netboot entry you just defined to the top of the boot order.
b. If prompted, press the Enter key to select the position.
c. Enter x to return to the Boot Configuration menu.
14. Perform this step on all nodes, including the head node.
Select the Console Configuration option, and do the following:
a. Enable the Serial Acpi(HWP0002,PNP0A03,700)/Pci(1|1) Vt100+ 9600 option.
b. Press the Esc key or enter x as many times as necessary to return to the Boot
Configuration menu.
c. When prompted, save the entry to NVRAM.
15. Turn off power to the node:
a. Press Ctrl-B to exit the console mode.
b. Enter CM to display the Command Menu.
c. Enter PC and enter off to turn off power to the node.
4.9.4 Preparing HP Integrity rx2660 Nodes
Figure 4-29 shows a rear view of the HP Integrity rx2660 server and the appropriate port
assignments for an HP XC system.
Figure 4-29 HP Integrity rx2660 Server Rear View
The callouts in the figure enumerate the following:
1. The LAN port labeled Gb 1 connects to the Administration Switch (branch or root).
2. The LAN port labeled Gb 2 is used for an external connection.
3. The port labeled MP LAN is the MP connection to the ProCurve Console Switch.
Setup Procedure
Perform the following procedure on each HP Integrity rx2660 server:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. For each node in the system, ensure that the power cord is connected but that the processor
is not turned on.
3. Follow this procedure to connect a personal computer to the Management Processor (MP):
a. Connect a cable to the console port (located to the left of the MP LAN port) on the back
of the HP Integrity rx2660 server.
b. Connect the CONSOLE connector to a null modem cable, and connect the null modem
cable to the PC COM1 port.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP. If the MP does not respond, press the MP reset
pin on the back of the MP and try again.
e. Log in to the MP using the default user name and password shown on the screen. The
MP Main Menu appears:
MP MAIN MENU:
CO: Console
VFP: Virtual Front Panel
CM: Command Menu
SMCLP: Server Management Command Line Protocol
CL: Console Log
SL: Show Event Logs
HE: Main Help Menu
X: Exit Connection
4. Enter SL to show event logs. Then, enter C to clear all log files and Y to confirm.
5. Enter CM to display the Command Menu.
6. Perform the following steps to ensure that the IPMI over LAN option is set. This setting is
required for Nagios monitoring.
a. Enter SA.
b. Verify the IPMI over LAN option is enabled.
c. Enable this option if it is disabled.
d. Return to the Command Menu.
7. Enter UC and use the menu options to remove the default MP user name and password and
create your own unique user name and password. HP recommends setting your own user
name and password for security purposes.
The user name must have a minimum of 6 characters, and the password must have a
minimum of 8 characters. You must set the same user name and password on every node.
You must have this user name and password to access the power management device and
console.
9. Enter PC (power cycle) and then enter on to turn on power to the node.
10. Press Ctrl-B to return to the MP Main Menu.
11. Enter CO to connect to the console.
12. Perform this step on all nodes except the head node. From the Boot Menu screen, which is
displayed during the power on of the node, select the Boot Configuration Menu.
Do the following from the Boot Configuration Menu:
a. Select Add Boot Entry.
b. Select Load File [Core LAN Gb A] as the network boot choice, which is a Gigabit
Ethernet (GigE) port.
c. Enter the string Netboot as the boot option description. This entry is required and must be set to the string Netboot (with a capital letter N).
d. Press the Enter key for no boot options.
e. If prompted, save the entry to NVRAM.
For more information about how to work with these menus, see the documentation that
came with the HP Integrity server.
13. Perform this step on all nodes except the head node. From the Boot Configuration menu,
select the Edit OS Boot Order option.
Do the following:
a. Use the navigation instructions on the screen to move the Netboot entry you just defined to the top of the boot order.
b. If prompted, press the Enter key to select the position.
c. Enter x to return to the Boot Configuration menu.
14. Perform this step on all nodes, including the head node. Select the Console Configuration
option, and set Serial Acpi(HWP0002,PNP0A03,0)/Pci(1|2) Vt100+ 9600 as the primary
console interface.
15. Press the b key to set the baud rate to 115200.
16. Press the Esc key or enter x as many times as necessary to return to the Boot Menu.
17. Turn off power to the node:
a. Press Ctrl-B to exit the console mode.
b. Enter CM to display the Command Menu.
c. Enter PC and enter off to turn off power to the node.
4.9.5 Preparing HP Integrity rx4640 Nodes
Figure 4-30 shows a rear view of the HP Integrity rx4640 server and the appropriate port
assignments for an HP XC system.
Figure 4-30 HP Integrity rx4640 Server Rear View
The callouts in the figure enumerate the following:
1. The port labeled MP LAN is the MP connection to the ProCurve Console Switch.
2. The port labeled LAN Gb connects to the Administration Switch (branch or root).
3. This unlabeled port is used for an external connection.
Setup Procedure
Perform the following procedure on each HP Integrity rx4640 server:
1. Use the instructions in the accompanying hardware documentation to connect a monitor,
mouse, and keyboard to the node.
2. For each node in the system, ensure that the power cord is connected but that the processor
is not turned on.
3. Follow this procedure to connect a personal computer to the Management Processor:
a. Connect a three-way DB9-25 cable to the MP DB-9 port on the back of the HP Integrity
rx4640 server.
This port is the first of the four DB9 ports at the bottom left of the server; it is labeled
MP Local.
b. Connect the CONSOLE connector to a null modem cable, and connect the null modem
cable to the PC COM1 port.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP. If the MP does not respond, press the MP reset
pin on the back of the MP and try again.
e. Log in to the MP using the default user name and password shown on the screen. The
MP Main Menu appears:
MP MAIN MENU:
CO: Console
VFP: Virtual Front Panel
CM: Command Menu
SMCLP: Server Management Command Line Protocol
CL: Console Log
SL: Show Event Logs
HE: Main Help Menu
X: Exit Connection
4. Enter SL to show event logs. Then, enter C to clear all log files and Y to confirm.
5. Enter CM to display the Command Menu.
6. Perform the following steps to ensure that the IPMI over LAN option is set. This setting is
required for Nagios monitoring.
a. Enter LC.
b. Verify the IPMI over LAN option is enabled.
c. Enable this option if it is disabled.
d. Return to the Command Menu.
7. Enter UC and use the menu options to remove the default MP user name and password and
create your own unique user name and password. HP recommends setting your own user
name and password for security purposes.
The user name must have a minimum of 6 characters, and the password must have a
minimum of 8 characters. You must set the same user name and password on every node.
You must have this user name and password to access the power management device and
console.
8. Enter PC (power cycle) and then enter on to turn on power to the node.
9. Press Ctrl-B to return to the MP Main Menu.
10. Enter CO to connect to the console.
11. Perform this step on all nodes except the head node. From the Boot Menu screen, which is
displayed during the power on of the node, select the Boot Configuration Menu.
Do the following from the Boot Configuration Menu:
a. Select Add Boot Entry.
b. Select Load File [Core LAN Gb A] as the network boot choice, which is a Gigabit
Ethernet (GigE) port.
c. Enter the string Netboot as the boot option description. This entry is required and
must be set to the string Netboot (with a capital letter N).
d. Press the Enter key for no db-profile options.
e. Press the Enter key for no boot options.
f. When prompted, save the entry to NVRAM.
For more information about how to work with these menus, see the documentation that
came with the HP Integrity server.
12. Perform this step on all nodes except the head node. From the Boot Configuration menu,
select the Edit OS Boot Order option.
Do the following:
a. Use the navigation instructions on the screen to move the Netboot entry you just
defined to the top of the boot order.
b. If prompted, press the Enter key to select the position.
c. Enter x to return to the Boot Configuration menu.
13. Perform this step on all nodes, including the head node. Select the Console Configuration
option, and do the following:
a. Enable the Serial Acpi(HWP0002,PNP0A03,0)/Pci(1|1) Vt100+ 9600 option on HP
Integrity rx4640 servers.
b. When prompted, save the entry to NVRAM.
c. Enter x to return to the previous menu.
14. Turn off power to the node:
a. Press Ctrl-B to exit the console mode.
b. Enter CM to display the Command Menu.
c. Enter PC and enter off to turn off power to the node.
4.9.6 Preparing HP Integrity rx8620 Nodes
Follow this procedure to prepare each HP Integrity rx8620 node.
IMPORTANT: HP Integrity rx8620 nodes are configured with static IP addresses, not DHCP. On an
HP XC system with one or more Integrity rx8620 nodes, you must configure the Management
Processor (MP) on all nodes for static IP addresses rather than DHCP.
General Hardware Preparation
The connection of nodes to ProCurve switch ports is important for the automatic discovery
process.
1. Connect the Gigabit Ethernet ports on the HP Integrity rx8620 Core IO boards into the
ProCurve 28xx switch at the next available ports. Use one Gigabit Ethernet port for each
partition, for a total of two ports for one HP Integrity rx8620 node.
On each HP Integrity rx8620 node, connect partition 0 to the highest-numbered available
open port, and connect partition 1 to the next lower-numbered port. Repeat this step for the
next HP Integrity rx8620, and so on.
2. Connect the Quadrics boards in the HP Integrity rx8620 partitions to the Quadrics switch
using the same pattern as the Gigabit Ethernet connections to the ProCurve 28xx switches.
Connect partition 0 of the first HP Integrity rx8620 server to the highest available Quadrics
switch port, which is followed by partition 1 to the next highest available switch port, which
is followed by partition 0 of the second HP Integrity rx8620 server, and so on.
Figure 4-31 shows the HP Integrity rx8620 Core IO board and the appropriate connections to the
administration network and console network.
Figure 4-31 HP Integrity rx8620 Core IO Board Connections
[The figure shows the MP LAN and SYS LAN ports on the Core IO board, which connect to the console network and the administration network.]
Preparing Individual Nodes
Follow this procedure for each HP Integrity rx8620 node in the hardware configuration:
1. Ensure that the power cord is connected but that the processor is not turned on.
2. Connect a personal computer to the Management Processor (MP):
a. Connect a null modem serial cable between the MP serial port and the PC COM1 port.
b. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
c. Press the Enter key to access the MP. If there is no response, press the MP reset pin on
the back of the MP and try again.
d. Log in to the MP using the default user name and password shown on the screen. The
MP Main Menu appears:
MP MAIN MENU:
CO: Console
VFP: Virtual Front Panel
CM: Command Menu
SMCLP: Server Management Command Line Protocol
CL: Console Log
SL: Show Event Logs
HE: Main Help Menu
X: Exit Connection
3. Enter SL to clear the error logs (CLR).
4. Enter CM to display the Command Menu.
NOTE: Most of the MP commands of the HP Integrity rx8620 are similar to the HP Integrity
rx2600 MP commands, but there are some differences. The two MPs for the HP Integrity
rx8620 operate in a master/slave relationship. Only the master MP, which is on Core IO
board 0, is assigned an IP address. Core IO board 0 is always the top Core IO board. The
slave MP is used only if the master MP fails.
5. Enter LC to configure the IP address, subnet mask, and default gateway address for the HP
Integrity rx8620 master MP interface; do not configure an IP address for the slave MP. (A
sketch of this interaction appears at the end of this procedure.)
Also verify that the IPMI over LAN option is enabled. Enable this option if it is disabled. This
setting is required for Nagios monitoring.
Return to the Command Menu.
6. Enter XD to apply your changes. Enter R to restart the MP.
7. Enter CM to return to the Command Menu.
8. Enter SO to set the MP user name and password. The user name must have a minimum of
6 characters, and the password must have a minimum of 8 characters. You must set the same
user name and password on every node.
IMPORTANT: The remaining steps in this procedure (setting the network boot option, the
boot order, and console options) must be performed twice, once for each partition on the
HP Integrity rx8620 system.
On HP Integrity rx8620 systems, some MP commands, such as CO and PE, prompt you to
supply the partition on which to perform the action.
9. Enter PE (power enable) to turn on power to the cabinet, if it is not already turned on.
10. Enter PE (power enable) to turn on power to the partition.
11. Press Ctrl-B to return to the Main Menu; then enter CO to connect to the console of the
partition.
NOTE: If the console stops accepting input from the keyboard, the following message is
displayed:
[Read-only - use ^Ecf to attach to console.]
In that situation, press and hold down the Ctrl key and type the letter e. Release the Ctrl
key, and then type the letters c and f to reconnect to the console.
12. Do the following from the EFI Boot Manager screen, which is displayed during the power-up
of the node.
NOTE: For more information about how to work with these menus, refer to the HP Integrity
rx8620–32 Server Installation Guide, which was shipped with the hardware.
a. Choose the Boot Option Maintenance Menu on the EFI Boot Manager screen.
i. Choose Add a Boot Option.
ii. Choose Load File [Acpi(HWP0002,0)/Pci(1|0)/Mac(XXXXXXXXXXXX)], which is
the Gigabit Ethernet (GigE) port on the server.
iii. Enter the string Netboot as the description for the boot option. This entry is
required and must be set to the string Netboot.
iv. Enter N for No Boot Option when prompted for the Boot Option Data Type.
v. Choose the option to save the entry to NVRAM.
vi. Choose Exit to quit the Add a Boot Option menu.
b. Choose the option to Change Boot Order from the EFI Boot Manager screen.
i. Press the u key on the keyboard to move the Netboot entry you just defined to
the top of the boot order.
ii. Save the setting to the NVRAM.
iii. Choose Exit to quit the Change Boot Order menu.
13. Enable console messages:
a. Choose the Select Active Console Output Devices option from the Boot Option
Maintenance menu to enable console messages to be displayed on the screen when
you turn on the system:
i. Set the Acpi(HWP0002,0)/Pci(0|1)/Uart(9600 N81)/VenMsg(Vt100+) option; this
is the only option that is set.
ii. Save the setting to the NVRAM.
iii. Choose Exit to return to the Boot Option Maintenance menu.
b. Choose the Select Active Console Input Devices option from the Boot Option
Maintenance Menu to enable keyboard input to the console when you turn on the
system:
i. Set the Acpi(HWP0002,0)/Pci(0|1)/Uart(9600 N81)/VenMsg(Vt100+) option; this
is the only option that is set.
ii. Save the setting to the NVRAM.
iii. Choose Exit to return to the Boot Option Maintenance menu.
c. Choose the Select Active Standard Error Devices option from the Boot Option
Maintenance menu to enable console messages to be displayed on the screen when
you turn on the system.
i. Set the Acpi(HWP0002,0)/Pci(0|1)/Uart(9600 N81)/VenMsg(Vt100+) option; this
is the only option that is set.
ii. Save the setting to the NVRAM.
iii. Choose Exit to return to the Boot Option Maintenance menu.
14. From the Boot Option Maintenance menu, add a boot option for the EFI Shell (if one does
not exist). Follow the instructions in step 12a.
15. Exit the Boot Option Maintenance menu.
16. Choose the EFI Shell boot option and boot to the EFI shell. Enter the following EFI shell
commands:
EFI> acpiconfig enable softpowerdown
EFI> acpiconfig single-pci-domain
EFI> reset
The reset command reboots the machine. You do not have to wait for this reboot to complete
before continuing to the next step to turn off power to the partition.
17. Turn off power to the partition; leave the cabinet power turned on:
a. Press Ctrl-B to exit out of console mode.
b. Enter CM to display the Command Menu.
c. Enter PE to turn off power to the partition.
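As an illustration of the static IP address assignment in step 5, an LC session might look like the following. The prompt labels and address values shown here are assumptions for illustration only; the exact prompts vary by MP firmware revision, and you supply the addresses assigned for your site:
MP:CM> LC
    IP Address: 172.21.0.10
    Subnet Mask: 255.0.0.0
    Gateway Address: 172.21.0.1
MP:CM> XD
    (enter R to restart the MP so that the new network settings take effect)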
4.10 Preparing the Hardware for CP6000BL Systems
Use the management processor (MP) to perform the following tasks on each server blade in the
hardware configuration after the head node is installed and the switches are discovered:
• Clear all event logs
• Enable IPMI over LAN
• Create an MP login ID and password that matches all other devices
• Add a boot entry for the string DVD boot on the head node, and add a boot entry for the
string Netboot on all other nodes
• Move the DVD boot and Netboot boot entries to the top of the boot order
• Set the primary console
• Set the console baud rate to 115200
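Before you begin, it can help to see how these tasks map to MP commands. The following summary is a paraphrase of the procedure below, not literal MP output; the command names are the ones used in the steps that follow:
SL (then C and y)          Clear all event logs
CM, then SA                Verify and enable IPMI over LAN
CM, then UC                Create the MP login ID and password
CO                         Connect to the console to reach the Boot Configuration Menu
pc -on -nc, pc -off -nc    Turn node power on and off from the Command Menu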
Procedure
Perform the following procedure on each HP Integrity server blade. All nodes must be seated
in an enclosure and plugged in, with power turned off.
1. Connect the local IO cable (also called the SUV cable) to the server blade. The local cable is
shipped with the enclosure and connects to the server blade at one end and is divided into
VGA, USB, and Serial ports at the other end.
2. Connect a serial terminal or laptop serial port to the serial port of the local IO cable.
3. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
4. Press the Enter key to access the MP. If there is no response, press the MP reset pin on the
back of the MP and try again.
5. Log in to the MP using the default administrator name and password that are shown on the
screen. The MP Main Menu is displayed.
MP MAIN MENU:
CO: Console
VFP: Virtual Front Panel
CM: Command Menu
SMCLP: Server Management Command Line Protocol
CL: Console Log
SL: Show Event Logs
HE: Main Help Menu
X: Exit Connection
6. Enter SL to show event logs. Then, enter C to clear all log files and y to confirm your action.
7. Enter CM to display the Command Menu.
8. Do the following to ensure that the IPMI over LAN option is set. This setting is required for
Nagios monitoring.
a. Enter SA to display the Set Access configuration menu.
b. Verify that the IPMI over LAN option is enabled.
c. Enable the IPMI over LAN option if it is disabled.
1. Enter the letter i to access the IPMI over LAN setting.
2. Enter the letter e to enable IPMI over LAN.
3. Enter the letter y to confirm your action.
d. Return to the Command Menu.
9. Enter UC (user configuration) and use the menu options to remove the default administrator
and operator accounts. Then, for security purposes, create your own unique user login ID
and password. Assign all rights (privileges) to this new user.
The user login ID must have a minimum of 6 characters, and the password must have exactly
8 characters. You must set the same user login ID and password on every node; all MPs,
iLOs, and OAs must use the same user name and password. Do not use any special characters
as part of the password.
10. Turn on power to the node:
MP:CM> pc -on -nc
11. Press Ctrl-B to return to the MP Main Menu.
12. Enter CO to connect to the console. It takes a few minutes for the live console to display.
13. Add a boot entry and set the OS boot order. Your actions for the head node differ from all
other nodes.
Table 4-47 Adding a Boot Entry and Setting the Boot Order on HP Integrity Server Blades

Head Node

Add a boot entry:
1. From the Boot Menu screen, which is displayed during the power on of the node, select the
Boot Configuration Menu.
2. Select Add Boot Entry.
3. Select Removable Media Boot as the boot choice.
4. Enter the string DVD boot as the boot option description.
5. Press the Enter key twice for no db-profile and load options.
6. If prompted, save the entry to NVRAM.

Set the boot order:
1. From the Boot Configuration menu, select the Edit OS Boot Order option.
2. Use the navigation instructions and the up arrow on the screen to move the DVD boot entry
you just defined to the top of the boot order.
3. Press the Enter key to select the position.
4. Press the x key to return to the Boot Configuration menu.

All Other Nodes

Add a boot entry:
1. From the Boot Menu screen, which is displayed during the power on of the node, select the
Boot Configuration Menu.
2. Select Add Boot Entry.
3. Select Load File [Core LAN Port 1] as the network boot choice, which is a Gigabit Ethernet
(GigE) port.
4. Enter the string Netboot as the boot option description. This entry is required and must be
set to the string Netboot (with a capital letter N).
5. Press the Enter key twice for no db-profile and load options.
6. If prompted, save the entry to NVRAM.

Set the boot order:
1. From the Boot Configuration menu, select the Edit OS Boot Order option.
2. Use the navigation instructions and the up arrow on the screen to move the Netboot entry
you just defined to the top of the boot order.
3. Press the Enter key to select the position.
4. Press the x key to return to the Boot Configuration menu.
For more information about how to work with these menus, see the documentation that
came with the HP Integrity server blade.
14. Perform this step on all nodes, including the head node. From the Boot Configuration
menu, select the Console Configuration option, and do the following:
a. Select Serial Acpi(HWP0002,PNP0A03,0)/Pci(1|2) Vt100+ 9600 as the primary console
interface.
b. Press the b key repeatedly until the baud rate is set to 115200.
c. Press the Esc key or press the x key as many times as necessary to return to the Boot
Menu.
d. If prompted, save the entry to NVRAM.
e. If prompted, reset the system.
15. Turn off power to the node:
a. Press Ctrl-B to exit the console mode.
b. Return to the Command Menu:
MP> CM
c. Turn off power to the node:
MP:CM> pc -off -nc
16. Use the RB command to reset the BMC.
17. Press Ctrl-B to exit the console mode and press the x key to exit.
After preparing all the nodes in all the enclosures, return to the HP XC System Software Installation
Guide to discover all the nodes and enclosures in the HP XC system.
5 Troubleshooting
This chapter describes known problems with respect to preparing hardware devices for use with
the HP XC System Software and their solutions.
5.1 iLO2 Devices
5.1.1 iLO2 Devices Can Become Unresponsive
There is a known problem with the iLO2 console management devices that causes the iLO2
device to become unresponsive to certain tools including the HP XC power daemon and the
iLO2 Web interface. When this happens, the power daemon generates CONNECT_ERROR messages.
Additional symptoms include the following:
• Inability to use the iLO2 Web interface
• Inability to control the node's boot options through the Onboard Administrator (OA) on HP
server blade enclosures.
When this problem occurs, the iLO2 device is not completely dead; only parts of it are hung.
You can clear up the problem using either of these methods:
• Use the following procedure to restart and reboot the node:
1. Completely remove power from the node by either removing the power cord or, in the
case of an HP server blade, removing the server blade from the enclosure.
2. Wait 15 seconds.
3. Restore power to the node.
Restoring power restarts the iLO2 device and also reboots the node.
• Use the following procedure to restart the iLO2 device without rebooting the node:
NOTE: This method can be used only if the iLO2 command line interface is not hung.
1. Use the telnet or ssh command to access the hung iLO2 device.
2. Log in to the iLO2 device.
Use the HP XC user name and password that you defined for your console devices.
3. Reboot the iLO2 device:
reset map1
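For example, on a node whose iLO2 device has the hypothetical host name n10-ilo, the session might look like the following. The host name, user name, and prompt shown here are illustrative assumptions; log in with the user name and password you defined for your console devices:
$ ssh xcadmin@n10-ilo
xcadmin@n10-ilo's password:
</>hpiLO-> reset map1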
A Establishing a Connection Through a Serial Port
Follow this generic procedure to establish a connection to a server using a serial port connection
to a console port. If you need more information about how to establish these connections, see
the hardware documentation.
1. Connect a null modem cable between the serial port on the rear panel of the server and a
COM port on the host computer.
2. Launch a terminal emulation program such as Windows HyperTerminal.
3. Enter a name for the connection, select an icon, and click OK.
4. Select the COM port on the host computer to which the serial cable is connected, and click
OK.
5. Make the following port settings:
a. Bits per second: 115200
b. Data bits: 8
c. Parity: None
d. Stop bits: 1
e. Flow control: None
6. Click OK.
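If the host computer runs Linux rather than Windows, a terminal emulator such as screen or minicom can make the equivalent connection. This is a minimal sketch that assumes the serial cable is connected to /dev/ttyS0; substitute the actual device name on your host:
# Using screen: 115200 baud, 8 data bits, no parity, 1 stop bit
screen /dev/ttyS0 115200,cs8,-parenb,-cstopb
# Or using minicom, which uses 8N1 by default at the given baud rate
minicom -D /dev/ttyS0 -b 115200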
B Server Blade Configuration Examples
This appendix contains illustrations and descriptions of fully cabled HP XC systems based on
interconnect type and server blade height.
The connections are color-coded, so consider viewing the PDF file online or printing this appendix
on a color printer to take advantage of the color coding.
B.1 Gigabit Ethernet Interconnect With Half-Height Server Blades
In this example, the server blades require connections to the external network. Because the two
NICs of the half-height server blade are already in use, an Ethernet card was added to mezzanine
bay 1 to allow for the external network connection on the server blade. An interconnect module
was added to bay 3.
On the non-blade server nodes, PCI Ethernet NICs are used for the external network connection.
Available ports vary based on hardware model. See the HP XC Hardware Preparation Guide for
more information about port assignments.
Figure B-1 Gigabit Ethernet Interconnect With Half-Height Server Blades
[The figure shows two non-blade servers with Ethernet PCI cards and MGT ports connected to the Admin ProCurve 2800 Series Switch and the Console ProCurve 2600 Series Switch, and a c-Class blade enclosure (interconnect bays 1 through 8) containing a Gigabit Ethernet interconnect switch, an Ethernet mezzanine card, iLO2 connections, and the Onboard Administrator, with a link to the external public network. Key: Administration Network, Console Network, Cluster Interconnect Network, External Network.]
B.2 InfiniBand Interconnect With Full-Height Server Blades
External network connections are required only on the non-blade server nodes and the full-height
server blades. On those server blades, NIC3 is used for the connection to the external network.
A VLAN is used to separate the external network traffic from the administration network traffic
on the switch in bay 1 to save the expense of an additional Ethernet interconnect module in bay 2.
On the non-blade server nodes, the built-in NICs were used for the external network connection.
See the HP XC Hardware Preparation Guide for more information about port assignments.
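For readers unfamiliar with switch-side VLAN setup, the following is a rough sketch of how such a separation might be defined on a switch with a ProCurve-style CLI. The VLAN ID, name, and port numbers are illustrative assumptions, not values prescribed by this guide; see the VLAN creation instructions earlier in this guide for the actual procedure:
ProCurve(config)# vlan 20
ProCurve(vlan-20)# name external
ProCurve(vlan-20)# untagged 21-24
ProCurve(vlan-20)# exit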
Figure B-2 InfiniBand Interconnect With Full-Height Server Blades
[The figure shows two non-blade servers with InfiniBand PCI cards and MGT ports connected to the Admin ProCurve 2800 Series Switch and the Console ProCurve 2600 Series Switch, and a c-Class blade enclosure containing an Ethernet switch in interconnect bay 1 that uses VLANs (ADMIN NET VLAN and EXTERNAL NET VLAN), InfiniBand mezzanine cards, a double-wide InfiniBand interconnect switch in bays 5 and 6, iLO2 connections, and the Onboard Administrator, with a link to the external public network. Key: Administration Network, Console Network, Cluster Interconnect Network, External Network.]
B.3 InfiniBand Interconnect With Mixed Height Server Blades
This configuration is similar to the configuration described in Section B.2 (page 143). The only
exception is that in this configuration, the half-height server blades require external connections
as well. Because half-height blades have two NICs, you must use NIC2 for the connection to the
external network. This also means that an interconnect module is required in bay 2.
On the non-blade server nodes, the built-in NICs are used for the external network connection.
See the HP XC Hardware Preparation Guide for more information about port assignments.
Figure B-3 InfiniBand Interconnect With Mixed Height Server Blades
[The figure shows two non-blade servers with InfiniBand PCI cards and MGT ports connected to the Admin ProCurve 2800 Series Switch and the Console ProCurve 2600 Series Switch, and a c-Class blade enclosure containing interconnect modules in bays 1 and 2, InfiniBand mezzanine cards, a double-wide InfiniBand interconnect switch in bays 5 and 6, iLO2 connections, and the Onboard Administrator, with a link to the external public network. Key: Administration Network, Console Network, Cluster Interconnect Network, External Network.]
Glossary
A
administration
branch
The half (branch) of the administration network that contains all of the general-purpose
administration ports to the nodes of the HP XC system.
administration
network
The private network within the HP XC system that is used for administrative operations.
availability set
An association of two individual nodes so that one node acts as the first server and the other
node acts as the second server of a service.
See also improved availability, availability tool.
availability tool
A software product that enables system services to continue running if a hardware or software
failure occurs by failing over the service to the other node in an availability set.
See also improved availability, availability set.
B
base image
The collection of files and directories that represents the common files and configuration data
that are applied to all nodes in an HP XC system.
branch switch
A component of the Administration Network. A switch that is uplinked to the root switch and
receives physical connections from multiple nodes.
C
cluster
A set of independent computers combined into a unified system through system software and
networking technologies.
cluster alias
CMDB
The external cluster host name supported by LVS, which enables inbound connections without
having to know individual nodes names to connect and log in to the HP XC system.
Configuration and management database. Constructed during HP XC system installation, the
CMDB is a MySQL database that stores information about the nodes, hardware, and software
configuration, and network connectivity. This database runs on the node with the node
management role.
compute node
A node that is assigned only with the compute role and no other. Jobs are distributed to and
run on nodes with the compute role; no other services run on a compute node.
configuration and management database
See CMDB.
console branch
A component of the administration network. The half (branch) of the administration network
that contains all of the console ports of the nodes of the HP XC system. This branch is established
as a separate branch to enable some level of partitioning of the administration network to
support specific security needs.
D
DHCP
Dynamic Host Control Protocol. A protocol that dynamically allocates IP addresses to computers
on a local area network.
Dynamic Host
Control Protocol
See DHCP.
E
EFI
Extensible Firmware Interface. Defines a model for the interface between operating systems
and Itanium-based platform firmware. The interface consists of data tables that contain
platform-related information, plus boot and run-time service calls that are available to the
operating system and its loader. Together, these provide a standard environment for booting
an operating system and running preboot applications.
enclosure
The hardware and software infrastructure that houses HP BladeSystem servers.
extensible
firmware
interface
See EFI.
external network node
A node that is connected to a network external to the HP XC system.
F
fairshare
An LSF job-scheduling policy that specifies how resources should be shared by competing
users. A fairshare policy defines the order in which LSF attempts to place jobs that are in a
queue or a host partition.
FCFS
First-come, first-served. An LSF job-scheduling policy that specifies that jobs are dispatched
according to their order in a queue, which is determined by job priority, not by order of
submission to the queue.
first-come,
first-served
See FCFS.
G
global storage
Storage within the HP XC system that is available to all of the nodes in the system. Also known
as local storage.
golden client
The node from which a standard file system image is created. The golden image is distributed
by the image server. In a standard HP XC installation, the head node acts as the image server
and golden client.
golden image
A collection of files, created from the golden client file system, that are distributed to one or
more client systems. Specific files on the golden client may be excluded from the golden image
if they are not appropriate for replication.
golden master
The collection of directories and files that represents all of the software and configuration data
of an HP XC system. The software for any and all nodes of an HP XC system can be produced
solely by the use of this collection of directories and files.
H
head node
The single node that is the basis for software installation, system configuration, and
administrative functions in an HP XC system. There may be another node that can provide a
failover function for the head node, but an HP XC system has only one head node at any one time.
host name
The name given to a computer. Lowercase and uppercase letters (a–z and A–Z), numbers (0–9),
periods, and dashes are permitted in host names. Valid host names contain from 2 to 63
characters, with the first character being a letter.
I
I/O node
A node that has more storage available than the majority of server nodes in an HP XC system.
This storage is frequently externally connected storage, for example, SAN attached storage.
When configured properly, an I/O server node makes the additional storage available as global
storage within the HP XC system.
iLO
Integrated Lights Out. A self-contained hardware technology available on CP3000 and CP4000
cluster platform hardware models that enables remote management of any node within a
system.
iLO2
The next generation of iLO that provides full remote graphics console access and remote virtual
media.
See also iLO.
image server
A node specifically designated to hold images that will be distributed to one or more client
systems. In a standard HP XC installation, the head node acts as the image server and golden
client.
improved
availability
A service availability infrastructure that is built into the HP XC system software to enable an
availability tool to fail over a subset of eligible services to nodes that have been designated as
a second server of the service
See also availability set, availability tool.
Integrated Lights Out
See iLO.
interconnect
A hardware component that provides high-speed connectivity between the nodes in the HP
XC system. It is used for message passing and remote memory access capabilities for parallel
applications.
interconnect
module
A module in an HP BladeSystem server. The interconnect module provides the physical I/O
ports for the server blades and can be either a switch, with connections to each of the server
blades and some number of external ports, or a pass-through module, with individual
external ports for each of the server blades.
See also server blade.
interconnect
network
The private network within the HP XC system that is used primarily for user file access and
for communications within applications.
Internet address
A unique 32-bit number that identifies a host's connection to an Internet network. An Internet
address is commonly represented as a network number and a host number and takes a form
similar to the following: 192.0.2.0.
IPMI
Intelligent Platform Management Interface. A self-contained hardware technology available
on HP ProLiant DL145 servers that enables remote management of any node within a system.
ITRC
HP IT Resource Center. The HP corporate Web page where software patches are made available.
You must register as an Americas/Asia Pacific or European customer.
L
Linux Virtual
Server
See LVS.
load file
A file containing the names of multiple executables that are to be launched simultaneously by
a single command.
Load Sharing
Facility
See LSF-HPC with SLURM.
local storage
Storage that is available or accessible from one node in the HP XC system.
LSF execution
host
The node on which LSF runs. A user's job is submitted to the LSF execution host. Jobs are
launched from the LSF execution host and are executed on one or more compute nodes.
LSF master host
The overall LSF coordinator for the system. The master load information manager (LIM) and
master batch daemon (mbatchd) run on the LSF master host. Each system has one master host
to do all job scheduling and dispatch. If the master host goes down, another LSF server in the
system becomes the master host.
LSF-HPC with
SLURM
Load Sharing Facility for High Performance Computing integrated with SLURM. The batch
system resource manager on an HP XC system that is integrated with SLURM. LSF-HPC with
SLURM places a job in a queue and allows it to run when the necessary resources become
available. LSF-HPC with SLURM manages just one resource: the total number of processors
designated for batch processing.
LSF-HPC with SLURM can also run interactive batch jobs and interactive jobs. An LSF interactive
batch job allows you to interact with the application while still taking advantage of LSF-HPC
with SLURM scheduling policies and features. An LSF-HPC with SLURM interactive job is run
without using LSF-HPC with SLURM batch processing features but is dispatched immediately
by LSF-HPC with SLURM on the LSF execution host.
See also LSF execution host.
LVS
Linux Virtual Server. Provides a centralized login capability for system users. LVS handles
incoming login requests and directs them to a node with a login role.
M
Management
Processor
See MP.
master host
See LSF master host.
MCS
An optional integrated system that uses chilled water technology to triple the standard cooling
capacity of a single rack. This system helps take the heat out of high-density deployments of
servers and blades, enabling greater densities in data centers.
Modular Cooling System
See MCS.
module
A package that provides for the dynamic modification of a user's environment by means of
modulefiles.
See also modulefile.
modulefile
Contains information that alters or sets shell environment variables, such as PATH and MANPATH.
Modulefiles enable various functions to start and operate properly.
MP
Management Processor. Controls the system console, reset, and power management functions
on HP Integrity servers.
MPI
Message Passing Interface. A library specification for message passing, proposed as a standard
by a broadly based committee of vendors, implementors, and users.
MySQL
A relational database system developed by MySQL AB that is used in HP XC systems to store
and track system configuration information.
N
NAT
Network Address Translation. A mechanism that provides a mapping (or transformation) of
addresses from one network to another. This enables external access of a machine on one LAN
that has the same IP address as a machine on another LAN, by mapping the LAN address of
the two machines to different external IP addresses.
Network Address Translation
See NAT.
Network
Information
Services
See NIS.
NIS
Network Information Services. A mechanism that enables centralization of common data that
is pertinent across multiple machines in a network. The data is collected in a domain, within
which it is accessible and relevant. The most common use of NIS is to maintain user account
information across a set of networked hosts.
NIS client
Any system that queries NIS servers for NIS database information. Clients do not store and
maintain copies of the NIS maps locally for their domain.
NIS master server
A system that stores the master copy of the NIS database files, or maps, for the domain in the
/var/yp/DOMAIN directory and propagates them at regular intervals to the slave servers. Only
the master maps can be modified. Each domain can have only one master server.
NIS slave server
A system that obtains and stores copies of the master server's NIS maps. These maps are updated
periodically over the network. If the master server is unavailable, the slave servers continue to
make the NIS maps available to client systems. Each domain can have multiple slave servers
distributed throughout the network.
O
OA
The enclosure management hardware, software, and firmware that is used to support all of the
managed devices contained within the HP BladeSystem c-Class enclosure.
onboard
administrator
See OA.
P
parallel
application
An application that uses a distributed programming model and can run on multiple processors.
An HP XC MPI application is a parallel application. That is, all interprocessor communication
within an HP XC parallel application is performed through calls to the MPI message passing
library.
PXE
Preboot Execution Environment. A standard client/server interface that enables networked
computers that are not yet installed with an operating system to be configured and booted
remotely. PXE booting is configured at the BIOS level.
R
remote graphics
software
See RGS.
resource
management role
Nodes with this role manage the allocation of resources to user applications.
RGS
HP Remote Graphics Software. A utility that enables remote access and sharing of a graphics
workstation desktop.
role
A set of services that are assigned to a node.
Root
Administration
Switch
A component of the administration network. The top switch in the administration network; it
may be a logical network switch comprised of multiple hardware switches. The Root Console
Switch is connected to the Root Administration Switch.
root node
A node within an HP XC system that is connected directly to the Root Administration Switch.
RPM
Red Hat Package Manager.
1. A utility that is used for software package management on a Linux operating system, most
notably to install and remove software packages.
2. A software package that is capable of being installed or removed with the RPM software
package management utility.
S
scalable
visualization
array
See SVA.
serial application
A command or user program that does not use any distributed shared-memory form of
parallelism. A serial application is basically a single-processor application that has no
communication library calls (for example, MPI, PVM, GM, or Portals).
An example of a serial application is a standard Linux command, such as the ls command.
Another example of a serial application is a program that has been built on a Linux system that
is binary compatible with the HP XC environment, but does not contain any of the HP XC
infrastructure libraries.
server blade
One of the modules of an HP BladeSystem. The server blade is the compute module consisting
of the CPU, memory, I/O modules and other supporting hardware. Server blades do not contain
their own physical I/O ports, power supplies, or cooling.
SLURM backup controller
The node on which the optional backup slurmctld daemon runs. On SLURM failover, this
node becomes the SLURM master controller.
SLURM master controller
The node on which the slurmctld daemon runs.
SMP
Symmetric multiprocessing. A system with two or more CPUs that share equal (symmetric)
access to all of the facilities of a computer system, such as the memory and I/O subsystems. In
an HP XC system, the use of SMP technology increases the number of CPUs (amount of
computational power) available per unit of space.
ssh
Secure Shell. A shell program for logging in to and executing commands on a remote computer.
It can provide secure encrypted communications between two untrusted hosts over an insecure
network.
standard LSF
A workload manager for any kind of batch job. Standard LSF features comprehensive workload
management policies in addition to simple first-come, first-served scheduling (fairshare,
preemption, backfill, advance reservation, service-level agreement, and so on). Standard LSF
is suited for jobs that do not have complex parallel computational needs and is ideal for
processing serial, single-process jobs. Standard LSF is not integrated with SLURM.
SVA
HP Scalable Visualization Array. A highly affordable, scalable, ready-to-run visualization
solution that completes the HP Unified Cluster Portfolio's integration of computation, data
management and visualization in a single, integrated cluster environment. The HP SVA solution
adds high-performance HP workstations in building block configurations that combine with
industry-standard visualization components. State-of-the-art industry standard and open source
clustering, graphics, and networking technology are leveraged to reduce costs and enhance
flexibility. The tight integration of scalable computation, data management and visualization
enables the following: clustered parallel visualization applications with support for very large
data sets, display of complex, high resolution images, including volume visualization, and
real-time rendering with computational steering through closely coupled visualization,
computation and data management.
symmetric
multiprocessing
See SMP.
Index
CP3000BL, 19
A
administration network
console branch, 31
application cabinet, 45
architecture (see processor architecture)
hardware preparation tasks, 84
HP ProLiant BL260c G5 Server Blade, 20
HP ProLiant BL2x220c G5 Server Blade, 20
HP ProLiant BL460c G5 Server Blade, 20
HP ProLiant BL480c G5 Server Blade, 20
HP ProLiant BL680c G5 Server Blade, 20
CP4000, 19
B
baseboard management controller (see BMC)
BIOS settings
HP ProLiant DL140 G2, 65
HP ProLiant DL140 G3, 66
HP ProLiant DL145, 88
HP ProLiant DL145 G2, 91
HP ProLiant DL145 G3, 92
HP ProLiant DL160 G5, 68
HP ProLiant DL165 G5, 94
HP ProLiant DL360 G4, 69
HP ProLiant DL360 G5, 71
HP ProLiant DL380 G4, 74
HP ProLiant DL380 G5, 75
HP ProLiant DL580 G4, 78
HP ProLiant DL580 G5, 80
HP xw8200 workstation, 81
HP xw8400 workstation, 82
HP xw8600 workstation, 83
BMC, 29
hardware preparation tasks, 87
CP4000BL
hardware preparation tasks, 119
HP ProLiant BL465c G5 Server Blade, 20
HP ProLiant BL465c Server Blade, 20
HP ProLiant BL685c G5 Server Blade, 20
HP ProLiant BL685c Server Blade, 20
CP6000, 19
BMC firmware, 59
C
c3000 enclosure, 25
c7000 server blade enclosure, 22
cabinet, 45
chip architecture (see processor architecture)
cluster platform (see CP3000) (see CP3000BL) (see CP4000)
(see CP6000)
hardware preparation tasks, 122
CP6000BL
supported, 19
CONNECT_ERROR, 139
console branch network, 31
console management devices, 29
console network
defined, 38
hardware preparation tasks, 136
HP Integrity BL860c Server Blade, 20
core IO board 0, 133
CP3000, 19
hardware preparation tasks, 63
D
documentation
additional publications, 16
compilers, 16
FlexLM, 15
HowTo, 12
HP XC System Software, 12
Linux, 15
CP3000BL, 84
CP4000, 87
LSF, 14
CP4000BL, 119
manpages, 17
master firmware list, 12
Modules, 15
CP6000, 122
CP6000BL, 136
for all cluster platforms, 61
HP xw8200 workstation, 80
HP xw8400 workstation, 80
HP xw8600 workstation, 82
HP xw9300 workstation, 116
HP xw9400 workstation, 116
hardware preparation tasks
HCA, 58
MPI, 16
MySQL, 15
Nagios, 14
pdsh, 15
reporting errors in, 17
rrdtool, 14
SLURM, 14
software RAID, 16
Supermon, 15
syslog-ng, 15
SystemImager, 15
TotalView, 16
head node
in utility cabinet, 45
high-speed interconnects, 56
host channel adapter (see HCA)
HowTo
E
Web site, 12
HP documentation
EFI boot manager
CP6000 systems, 124
EFI firmware, 59
ELAN4 (see QsNet)
setup guidelines, 24
Ethernet ports
providing feedback for, 17
HP ProLiant DL365 G5, 20
head node, 61
external network
creating VLANs, 41
defined, 41
NIC use, 41
external storage, 45
F
feedback
e-mail address for documentation, 17
firmware
BMC, 59
InfiniBand, 59
IPMI, 59
master list, 59
Myrinet, 59
Quadrics, 59
system, 59
system BIOS, 59
G
H
hardware configuration
supported, 21
hardware models
hardware preparation
L
large-scale system
defined, 32
lights-out 100 (see LO-100i)
line monitoring card
connection, 50
LO-100i, 29
LSF
I
documentation, 14
enabling telnet, 29
M
iLO settings
management processor (see MP)
manpages, 17
HP ProLiant DL360 G4, 69
HP ProLiant DL360 G5, 71
HP ProLiant DL365, 95
HP ProLiant DL365 G5, 98
HP ProLiant DL380 G4, 73
HP ProLiant DL385, 101
HP ProLiant DL385 G5, 73
HP ProLiant DL580 G4, 77
HP ProLiant DL580 G5, 79
HP ProLiant DL585, 107
HP ProLiant DL585 G2, 107
HP ProLiant DL785 G5, 114
iLO2
mezzanine cards, 28
defined, 29
features, 29
MP firmware, 59
Myrinet
interface cards revision, 59
N
network
administration console branch, 31
console, 38
CONNECT_ERROR, 139
defined, 29
features, 29
external, 41
web interface, 139
network cabling, 33
network configuration, 33
nodes
InfiniBand interconnect, 40
insight display
defined, 28
maximum number in system, 32
maximum number of, 56
Integrated Lights-Out 2 (see iLO2)
Intelligent platform management interface (see IPMI)
interconnect
O
connections, 56
OFED, 58
console connection, 50
onboard administrator
defined, 27
network, 31
on administration network, 57
setting the password, 62
OpenFabrics Enterprise Distribution (see OFED)
P
interconnect bay port mapping, 28
interconnect module, 28
interconnect network
defined, 39
password
onboard administrator, 62
ProCurve switch administrator, 47
PCI-X, 56
Gigabit Ethernet, 39
InfiniBand, 40
running on administration network, 41
IP address
port connections
branch administration switch, 54
branch console switch, 55
interconnect switch, 56
root administration switch, 50
root console switch, 51
super root switch, 49
IPMI, 29
firmware, 59
processor architecture, 19
AMD Opteron, 19
Intel Itanium, 19
software RAID
Intel Xeon with EM64T, 19
processor architectures, 20
ProCurve 2610-24
ProCurve 2610-48
branch console switch, 55
ProCurve 2626, 47
ProCurve 2650, 47
documentation, 16
storage, 45
in large-scale system, 48
supported cluster platforms, 19
supported console management devices, 29
supported HP ProLiant server blade models, 21
supported interconnects, 31
supported server blade combinations, 21
supported server models, 20
supported switch models, 47
switch
branch console switch, 55
root console switch, 51
ProCurve 2824, 47
branch administration switch, 54
root administration switch, 50
super root switch, 49
choices, 45
ProCurve 2848, 47
connections for workstations, 49
port connections for large-scale systems, 48
branch administration switch, 54
root administration switch, 50
super root switch, 49
ProCurve switch
administrator password, 47
public network (see external network)
specialized use, 46
supported models, 47
Q
system firmware, 59
QsNet, 57
interconnect, 31
Quadrics (see QsNet)
T
telnet
enabling on iLO devices, 29
trunking, 45
R
RBSU settings
HP ProLiant DL365, 96
HP ProLiant DL365 G5, 99
HP ProLiant DL385, 103
HP ProLiant DL385 G2, 104
HP ProLiant DL585, 109
HP ProLiant DL585 G2, 110
HP ProLiant DL785 G5, 115
region
port use on large-scale systems, 49
U
utility cabinet, 45
V
virtual local area network (see VLAN)
VLAN
defined, 32
creating, 44
defined, 41
reporting documentation errors
feedback e-mail address for, 17
W
Web site
HP XC System Software documentation, 12
workstation, 80
HP xw8200, 80
HP xw8400, 80
HP xw8600, 82
HP xw9300, 116
HP xw9400, 116
S
server blade
defined, 19
preparing HP Integrity nodes, 136
server blade combinations, 21
server blade enclosure
c3000, 25
c7000, 22
*A-XCHWP-321c*
Printed in the US