
User Service Guide  
HP Integrity Superdome/sx2000 and HP 9000  
Superdome/sx2000 Servers  
HP Part Number: A9834-9001D_ed6  
Published: September 2009  
Edition: 6  
About This Document  
This document contains the system overview, system-specific parameters, system installation
procedures, operating system specifics, and procedures for system components.
Intended Audience  
This document is intended for HP trained Customer Support Consultants.  
Document Organization  
This document is organized as follows:  
Chapter 1  
This chapter presents an historical view of the Superdome server family,  
describes the various server components, and describes how the server  
components function together.  
Chapter 2  
This chapter contains the dimensions and weights for the server and various  
components. Electrical specifications, environmental requirements, and templates  
are also included.  
Chapter 3  
This chapter describes how to unpack and inspect the system, set up the system,  
connect the MP to the customer LAN, and how to complete the installation.  
Chapter 4  
This chapter describes how to boot and shut down the server operating system  
(OS) for each OS supported.  
Appendix A
This appendix contains tables that describe the various LED states for the front
panel, power and OL* states, and OL* states for I/O chassis cards.
Appendix B
This appendix provides a summary for each management processor (MP)
command. Screen output is provided for each command so you can see the
results of the command.
Appendix C
This appendix provides procedures to power off and power on the system when
the removal and replacement of a component requires it.
Appendix D
This appendix contains templates for cable cutouts and caster locations; SD16,
SD32, SD64, and I/O expansion cabinets; and the computer room floor.
Typographic Conventions  
The following typographic conventions are used in this document.  
WARNING! Lists requirements that you must meet to avoid personal injury.  
CAUTION: Provides information required to avoid losing data or to avoid losing system  
functionality.  
IMPORTANT: Provides essential information to explain a concept or to complete a task.  
NOTE: Highlights useful information such as restrictions, recommendations, or important  
details about HP product features.  
Commands and options are represented using this font.
Text that you type exactly as shown is represented using this font.
Text to be replaced with text that you supply is represented using this font.  
Example: “Enter the ls -l filename command” means you must replace filename with your  
own text.  
Keyboard keys and graphical interface items (such as buttons, tabs, and menu items)  
are represented using this font.  
Examples: The Control key, the OK button, the General tab, the Options menu.  
Menu > Submenu represents a menu selection you can perform.  
Example: “Select the Partition > Create Partition action” means you must select the  
Create Partition menu item from the Partition menu.  
Example screen output is represented using this font.
Related Information  
Further information on HP server hardware management, Microsoft® Windows®, and diagnostic
support tools is available through the following website links.
Website for HP Technical Documentation The following link is the main website for HP technical  
documentation. This site offers comprehensive information about HP products available for free.  
Server Hardware Information The following link is the systems hardware section of the  
docs.hp.com website. It provides HP nPartition server hardware management information,  
including information on site preparation, installation, and so on. See http://docs.hp.com/hpux/  
hw/.  
Diagnostics and Event Monitoring: Hardware Support Tools The following link contains  
comprehensive information about HP hardware support tools, including online and offline  
diagnostics and event monitoring tools. This website has manuals, tutorials, FAQs, and other  
reference material. See http://docs.hp.com/hpux/diag.  
Website for HP Technical Support The following link is the HP IT resource center website and  
provides comprehensive support information for IT professionals on a wide variety of topics,  
including software, hardware, and networking. See http://us-support2.external.hp.com.
Publishing History  
The document printing date and edition number indicate the document's current edition and
are included in the following table. The printing date will change when a new edition is produced.
Document updates may be issued between editions to correct errors or document product changes.
The latest version of this document is available online at:
First Edition     March 2006
Second Edition    September 2006
Third Edition     February 2007
Fourth Edition    November 2007
Fifth Edition     March 2009
Sixth Edition     September 2009
HP Encourages Your Comments  
HP welcomes your feedback on this publication. Direct your comments to http://docs.hp.com/  
en/feedback.html and note that you will not receive an immediate reply. All comments are  
appreciated.  
1 Overview  
Server History and Specifications  
Superdome was introduced as the new platform architecture for high-end HP servers between  
the years 2000 and 2004. Superdome represented the first collaborative hardware design effort  
between traditional HP and Convex technologies. Superdome was designed to replace T- and  
V-Class servers and to prepare for the transition from PA-RISC to Intel® Itanium® processors.  
The new design enabled different operating systems to run on the same server.
The design also included several new high-availability features. Initially, Superdome was released
with the legacy core electronics complex (CEC) and a 552 MHz PA-8600 processor. The legacy
CEC supported two additional processor speeds: a 750 MHz PA-8700 followed by an 875 MHz
PA-8700 processor.
The HP Integrity server project consisted of four projects based on the sx1000 CEC chipset and  
the Integrity cell boards. The first release comprised the sx1000 chipset, Integrity cell boards, Itanium
firmware, and a 1.2 GHz Intel® processor. This release included PCI-X and PCI I/O mixes. The
Integrity systems were compatible with the legacy Superdome IOX.  
The second release, based on the sx1000 CEC, included Integrity cell boards, but used PA-RISC  
firmware, and a dual-core PA-RISC processor. The release also included a 2 GB DIMM and a  
new HP-UX version. Components such as processors, processor power pods, memory, firmware,  
and operating system all changed for this release.  
Figure 1-1 Superdome History  
The third release, also based on the sx1000 chipset, included the Integrity cell boards, Itanium  
firmware, and a 1.5 GHz Itanium CPU. The CPU module consisted of a dual-core processor with
a new cache controller. The firmware allowed for mixed cells within a system. All three DIMM  
sizes were supported. Firmware and operating system changes were minor compared to their  
earlier versions.  
The fourth and final release is the HP super scalable sx2000 processor chipset. It is also based on  
the new CEC that supports up to 128 PA-RISC or Itanium processors. It is the last generation of  
Superdome servers to support the PA-RISC family of processors. Modifications to the server  
components include:  
the new CEC chipset  
board changes including cell board  
system backplane  
I/O backplane  
associated power boards  
interconnect  
a redundant, hot-swappable clock source  
Server Components  
A Superdome system consists of the following types of cabinet assemblies:  
Minimum of one Superdome left-side cabinet. The Superdome cabinet contains the processors,
the memory, and the core devices of the system. It also houses the system's PCI cards.
Systems can include both left and right cabinet assemblies containing a left or right backplane  
(SD64) respectively.  
One or more HP Rack System/E cabinets. These rack cabinets are used to hold the system  
peripheral devices such as disk drives.  
Optionally, one or more I/O expansion cabinets (Rack System/E). An I/O expansion cabinet  
is required when a customer requires more PCI cards than can be accommodated in the  
Superdome cabinets.  
The width of the cabinet assemblies accommodates moving them through standard-sized  
doorways. The intake air to the main (cell) card cage is filtered. This air filter is removable for  
cleaning and replacement while the system is fully operational.  
A status display is located on the outside of the front and rear doors of each cabinet. This feature  
enables you to determine the basic status of each cabinet without opening any cabinet doors.  
The Superdome is a cell-based system. Cells communicate with one another through the crossbar on
the backplane. Every cell has its own I/O interface, which can be connected to one 12-slot I/O
card cage using two System Bus Adapter (SBA) link cables. Not all SBA links are connected by  
default, due to a physical limitation of four I/O card cages per cabinet or node. In addition to  
these components, each system consists of a power subsystem and a utility subsystem. Three  
types of Superdome are available:  
SD16  
SD32  
SD64, a two-cabinet system with single-CPU cell board sockets  
The SD## represents the maximum number of available CPU sockets.  
An SD16 contains the following components:  
Up to four cell boards  
Four I/O card cages  
Five I/O fans  
Four system cooling fans  
Four bulk power supplies (BPS)  
Two power distribution control assemblies (PDCA)  
Two backplane N+1 power supplies provide power to the SD16. The four cell boards are connected  
to one pair of crossbar chips (XBC). The backplane of an SD16 is the same as a backplane of an  
SD32. On the HUCB utility PCB is a switch set to TYPE= 1.  
An SD32 has up to eight cell boards. All eight cell boards are connected to two pairs of XBCs.  
The SD32 backplane is designed for a system upgrade to an SD64. On an SD32, four of the eight  
connectors use U-Turn cables. The U-Turn cables double the number of links and the bandwidth  
between the XBCs and are recommended to achieve best performance. An SD64 has up to 16 cell  
boards and requires two cabinets. All 16 cell boards are connected to four pairs of XBCs. The  
SD64 consists of left backplane and right backplane cabinets, which are connected using 12  
m-Link cables.  
When the PA-RISC dual-core or the Itanium dual-core processors are used, the CPU counts are  
doubled by the use of the dual-die processors, as supported on the Intel® Itanium® cell boards.  
Up to 128 processors can be supported.  
Figure 1-2 Superdome Cabinet Components  
Power Subsystem  
The power subsystem consists of the following components:  
One or two PDCAs  
One Front End Power Supply (FEPS)  
Up to six BPS  
One power board per cell  
An HIOB power system  
Backplane power bricks  
Power monitor (PM) on the Universal Glob of Utilities (UGUY)  
Local power monitors (LPM) on the cell, the HIOB, and the backplanes  
AC Power  
The ac power system includes the PDCA, one FEPS, and up to six BPS.  
The FEPS is a modular, 2N+2 shelf assembly power system that can consume up to 17 kVA of
power from ac sources. The purpose of the FEPS chassis is to provide interconnect, signal and  
voltage busing between the PDCAs and BPSs, between the BPSs and utility subsystem, and  
between the BPS and the system power architecture. The FEPS subsystem comprises three distinct  
modular assemblies: six BPS, two PDCAs, and one FEPS chassis.  
At least one 3-phase PDCA per Superdome cabinet is required. For redundancy, you can use a  
second PDCA. The purpose of the PDCA is to receive a single 3-phase input and output three  
1-phase outputs with a voltage range of 200 to 240 volts regardless of the ac source type. The  
PDCA also provides a convenience disconnect switch/circuit breaker for service, test points, and  
voltage present LED indicators. The PDCA is offered as a 4-wire or a 5-wire PDCA device.  
Separate PDCAs (PDCA-0 and PDCA-1) can be connected to 4-wire and 5-wire input source  
simultaneously as long as the PDCA internal wiring matches the wiring configuration of the ac  
source.  
The 4-wire PDCA is used in a phase-to-phase voltage range of 200 to 240 volts at 50/60 Hz. This
PDCA is rated for a maximum input current of 44 Amps per phase. The ac input power line to
the PDCA is connected with power plugs or is hardwired. When using power plugs, use a power
cord [OLFLEX 190 (PN 6008044), four conductor, 6-AWG (16 mm²), 600 V, 60 Amp, 90 °C, UL and
CSA approved, conforms to CE directives, GN/YW ground wire].
When installing cables in locations that have been designated as “air handling spaces” (under  
raised flooring or overhead space used for air supply and air return), advise the customer to  
specify the use of data cables that contain a plenum rating. Data cables with this rating have been  
certified for FLAMESPREAD and TOXICITY (low smoke emissions). Power cables do not carry  
a plenum rating, they carry a data processing (DP) rating. Power cables installed in air handling  
spaces should be specified with a DP rating. Details on the various levels of the DP rating system  
are found in the National Electric Code (NEC) under Article 645.  
The following plugs are recommended for the 4-wire PDCA:
In-line connector: Mennekes ME 460C9, 3-phase, 4-wire, 60 Amp, 250 V, UL approved, color  
blue, IEC309-1 grounded at 9:00 o'clock.  
Panel-mount receptacle: Mennekes ME 460R9, 3-phase, 4-wire, 60 Amp, 250 V, UL approved,  
color blue, IEC309-1 grounded at 9:00 o'clock.  
The 5-wire PDCA is used in a phase-to-neutral voltage range of 200 to 240 V ac at 50/60 Hz. This
PDCA is rated for a maximum input current of 24 Amps per phase. The ac input power line to
the PDCA is connected with power plugs or is hardwired. When using power plugs, use a power
cord [five conductors, 10-AWG (6 mm²), 450/475 V, 32 Amps, <HAR> European wire cordage,
GN/YW ground wire]. Alternatively, the customer can provide the power plug, including the
power cord and the receptacle. Recommended plugs are as follows:
Inline connector: Mennekes ME532C6-16, 3-phase, 5-wire, 32 Amps, 450/475 V, VDE certified,  
color red, IEC309-1, IEC309-2, grounded at 6:00 o'clock.  
Panel-mount receptacle: Mennekes ME532R6-1276, 3-phase, 5-wire, 32 Amp, 450/475 V, VDE  
certified, color red, IEC309-1, IEC309-2, grounded at 6:00 o'clock.  
FUSE per phase: 25 Amp (valid for Germany).  
DC Power  
Each power supply output provides 48 V dc up to 60 A (2.88 kVA) and 5.3 V dc housekeeping.  
Normally an SD32 Superdome cabinet contains six BPS independent from the installed number  
of cells and I/O. An SD16 normally has four BPS installed.  
Power Sequencing  
The power on sequence is as follows:  
1. When the main power circuit breaker is turned on, the housekeeping (HKP) voltage turns  
on first and provides 5.3 V dc to the UGUY, Management Processor (MP), system backplane,  
cells, and all HIOB. Each BPS provides 5.3 V.  
2. When HKP voltage is on, the MP performs the following steps:
a. De-asserts reset and begins to boot the SBC.
b. Loads VxWorks from flash (can be viewed from the local port).
c. SBC POST completes, single board computer hub (SBCH) power-on self-test (POST)
begins, and LED activity starts.
d. Loads firmware from Compact Flash to RAM.
e. SBCH POST completes. The heartbeat light blinks. USB LEDs turn on later.
f. CLU POST and PM POST begin immediately after power on.
3. After MP POST completes, the MP configures the system.  
4. The CLU POST completes.  
5. When PM POST completes, the system takes several steps.  
6. When the MP finishes the system configuration, it becomes operational and completes  
several tasks.  
7. When the PDHC POST completes, it becomes operational and completes its tasks.  
When the MP, CLU, PM, and PDHC POSTs complete, the utility entities run their main loops.
Enabling 48 Volts  
The PM must enable +48 V first, but it must obtain permission from the MP. To enable 48 V, the
cabinet power switch must transition from OFF to ON. Alternatively, you can use the MP command
PE if the power switch is already ON. (If the switch is ON, the cabinet wakes up from a power-on
reset.)
If the PM has permission, it sends a PS_CTL_L signal to the FEPS. Then the BPS enables +48 V  
converters, which send +48 V to the backplane, I/O chassis, HUCB, cells, fans, and blowers. Once  
the +48 V is enabled, it is cabled to the backplane, cells, and I/O chassis.  
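The following minimal sketch models the enable flow just described. The function and signal hooks (mp_grants_permission, assert_ps_ctl_l) are illustrative names, not actual PM firmware interfaces; only the ordering of the permission check and the PS_CTL_L assertion reflects the text.

# Minimal sketch of the 48 V enable flow described above. Hook names are
# illustrative, not actual PM firmware interfaces.
def enable_48v(power_switch_on, mp_grants_permission, assert_ps_ctl_l):
    """Enable +48 V only when the switch is ON and the MP grants permission."""
    if not power_switch_on:
        return False                 # switch still OFF: nothing to do
    if not mp_grants_permission():   # the PM must first ask the MP
        return False
    assert_ps_ctl_l()                # signal the FEPS; the BPSs enable their 48 V converters
    return True

# Example: switch ON and permission granted
print(enable_48v(True, lambda: True, lambda: None))   # True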
Cooling System  
The Superdome has four blowers and five I/O fans per cabinet. These components are all  
hot-swappable. All have LEDs indicating their current status. Temperature monitoring occurs  
for the following:  
Inlet air for temperature increases above normal  
BPS for temperature increases above normal  
The I/O power board overtemperature signal
The inlet air sensor is on the main cabinet, located near the bottom of cell 1 front. The inlet air  
sensor and the BPS sensors are monitored by the power monitor 3 (PM3) on the UGUY, and the  
I/O power board sensors are monitored by the CLU on the UGUY.  
The PM controls and monitors the speed of groups of N+1 redundant fans. In a CPU cabinet, fan  
group 0 consists of the four main blowers and fan group 1 consists of the five I/O fans. In an I/O  
Expansion (IOX) cabinet, fan groups 0–3 consist of four I/O fans and fan group 4 consists of two  
management subsystem fans. All fans are expected to be populated at all times with the exception  
of the OLR of a failed fan.  
The main blowers feature a variable speed control. The blowers operate at full speed; available  
circuitry can reduce the normal operating speed. All of the I/O fans and managed fans run at  
one speed.  
One minute after setting the main blower fan reference to the desired speed or powering on the
cabinet, the PM uses the tach select register to cycle through each fan and measure its speed.
When a fan is selected, Timer 1 is used in counter mode to count the pulses on port T1 over a
period of one second. If the frequency does not fall within the expected frequency plus or minus
a margin of error, the fan is considered to have failed and is subtracted from the working fan count.
If the failure causes a transition to N- I/O or main fans in a CPU cabinet, the cabinet is immediately  
powered off. If the failure causes a transition to N- I/O fans in an IOX cabinet, the I/O backplanes  
contained in the I/O Chassis Enclosure (ICE) containing that fan group are immediately powered  
off.  
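As a rough illustration of the tach-based check and the response described above, the following Python sketch counts working fans in a group against an expected frequency. The pulse rates, margin, and required count are placeholders, not PM firmware constants.

# Hedged sketch of the fan-speed check described above; numeric values are
# placeholders, not real PM firmware values.
def check_fan_group(measured_hz, expected_hz, margin_hz, required_n):
    """Count working fans and report whether the group still has at least N fans."""
    working = 0
    for hz in measured_hz:                       # one tach measurement per fan (1 s count)
        if abs(hz - expected_hz) <= margin_hz:   # within expected frequency +/- margin
            working += 1                         # fan passes
        # else: fan considered failed and is not counted
    return working >= required_n                 # False -> cabinet (or ICE) powers off

# Example: five I/O fans, one reading far off nominal
print(check_fan_group([120, 118, 121, 60, 119], expected_hz=120, margin_hz=10, required_n=4))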
Only inlet temperature increases are monitored by HP-UX; all other high-temperature-increase
chassis codes do not activate the envd daemon to act as configured in the /etc/envd.conf
file. The PM monitors ambient inlet temperature. The PM polls an analog-to-digital converter to
read the current ambient temperature. The temperature falls into one of four ranges: Normal,  
OverTempLow, OverTempMid, or OverTempHigh. The following state codes describe the actions  
taken based on the various temperature state transitions:  
OTL_THRESHOLD = 32°C -> Send error code PDC_IPR_OLT
OTM_THRESHOLD = 38°C -> Send error code PDC_INT_OTM
OTH_THRESHOLD = 40°C -> Shut down 48 V
NOTE: In an IOX cabinet, the thresholds are set two degrees higher to compensate for the fact  
that the cabinet sensor is mounted in a hot spot.  
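A small sketch of the temperature-state mapping above, using the thresholds from the text and the two-degree IOX offset from the NOTE. The state names follow the text; the function itself is illustrative, not PM firmware.

# Illustrative mapping of ambient inlet temperature to the PM states described above.
OTL, OTM, OTH = 32, 38, 40  # degrees C, CPU cabinet thresholds from the text

def temperature_state(temp_c, iox_cabinet=False):
    offset = 2 if iox_cabinet else 0            # IOX sensor sits in a hot spot
    if temp_c >= OTH + offset:
        return "OverTempHigh"                   # shut down 48 V
    if temp_c >= OTM + offset:
        return "OverTempMid"                    # send PDC_INT_OTM error code
    if temp_c >= OTL + offset:
        return "OverTempLow"                    # send the OTL error code
    return "Normal"

print(temperature_state(39))            # OverTempMid in a CPU cabinet
print(temperature_state(39, True))      # OverTempLow in an IOX cabinet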
Utilities Subsystem  
The Superdome utilities subsystem is comprised of a number of hardware and firmware  
components located throughout the Superdome system.  
Platform Management  
The sx2000 platform management subsystem consists of a number of hardware and firmware  
components located throughout the sx2000 system. The sx2000 uses the sx1000 platform  
management components, with firmware changes to support new functionality.  
The following list describes the major hardware components of the platform management  
subsystem and the changes required for the sx2000:  
The PDH microcontroller is located on each cell PDH daughtercard assembly. It provides  
communication between the management firmware, the PDH space, and the USB bus. The  
microcontroller represents a change from the prior implementation, an Intel® 80C251 processor, to
a more powerful 16-bit microcontroller. This microcontroller change enables the PDH
daughtercard design to be compatible across all three new CEC platforms. It also enables the  
extra processing power to be used to move the console UARTs into PDH memory space located  
on the cell, eliminating the sx1000 core I/O (CIO) card.  
The UGUY on Superdome contains the PM, the CLU, and the system clock source circuitry.  
The CLU circuitry on the UGUY assembly provides cabinet-level cable interconnect for backplane,  
I/O card cage utility signal communication, and scan support.  
The PM circuitry on the UGUY assembly monitors and controls the 48 V dc, the cabinet  
environment (ambient temperature and fans), and controls power to the entities (cells and I/O  
bays).  
The MP is a single board computer (SBC) that controls the console (local and remote) and the front
panel display and its redirection on the console, maintains logs of event IDs, coordinates
messages between devices, and performs other service processor functions.
The SBCH board provides USB hubs into the cabinet from an upstream hub or the MP.  
UGUY  
Every cabinet contains one UGUY. See Figure 1-3. The UGUY plugs into the HUCB. It is not
hot-swappable. Its microprocessor controls power monitor functions, executing the Power
Monitor 3 (PM3) firmware and the CLU firmware.
Figure 1-3 UGUY  
CLU Functionality  
The CLU collects and reports the configuration information for itself, the main backplane, I/O  
backplanes, and the SUB/HUB. Each of these boards has a configuration EEPROM containing  
FRU IDs, revision information, and for the main backplane and I/O backplanes, maximum power  
requirements in the fully configured, fully loaded states. These EEPROMs are powered by  
housekeeping power (HKP) and are accessible to SARG from an I2C bus. The power requirement  
information is sent to the PM3 automatically when HKP is applied or when a new entity is  
plugged in. The configuration information is sent to the SUB in response to a get_config  
command.  
The CLU gathers the following information over its five I2C buses:  
Board revision information is contained in the board's configuration EEPROM for the UGUY  
board, the SBCH board, the main backplane, the main backplane power boards (HBPB), the  
I/O backplane (HIOB), and the I/O backplane power boards (IOPB).  
Power requirements from the configuration EEPROM for the main backplane (HLSB or  
HRSB) and the I/O backplanes. This information is sent to the PM3 processor so it can  
calculate cabinet power requirements.  
Power control and status interface. Another function of the UGUY is to use the power_good
signals to drive the power on sequence.
Reset control, which includes a reset for each I/O backplane, a main backplane cabinet reset, and
TRST (JTAG reset) for all JTAG scan chains in the entire cabinet; system clock control, which
includes margin control (nominal or high margin) and clock source selection (internal or
external); and OL* LED control.
Status LEDs for the SBA cable OL*, the cell OL*, the I/O backplane OL*, the JTAG scan  
control, the three scan chains per cell, the three scan chains per I/O backplane, and the three  
scan chains on the main backplane.  
PM3 Functionality  
The PM3 performs the following functions:  
1. FEPS control and monitoring.  
Superdome has six BPS and the UGUY sends 5V to the BPS for use by the fault collection  
circuitry.  
2. Fan control and monitoring.  
In addition to the blowers, there are five I/O system fans above and between the I/O bays.  
These fans run at full speed all the time. There is no fan speed signal.  
3. Cabinet mode and cabinet number fan out.  
The surface mount dip switch on the HUCB (UGUY backplane) is used to configure a  
Superdome cabinet for normal use or as an SD16 cabinet. Use the 16-position thumb switch  
on the UGUY to set the cabinet number. Numbers 0-7 are for CPU-oriented cabinets and  
numbers 8-15 are for I/O-only cabinets.  
4. Local Power Monitor (LPM) interfaces. Each big board (cell board, I/O backplane, and main
backplane) contains logic that controls conversion of 48 V to lower voltages. The PM3  
interfaces to the LPM with the board-present input signal to the PM3 and the power-enable  
output signal from the PM3.  
5. Front and rear panel board control.  
System Clocks  
The sx2000 system clock differs from the sx1000 system clock in that the system clocks are only  
supplied from the backplane and to the backplane crossbar ASICs and the cell boards. There is  
no distribution of the system clocks to the I/O backplanes. Instead, independent local clock  
distribution is provided on the I/O backplane. The system clocks are not provided by the PM3  
on sx2000 servers. The sx2000 system clock source resides on the system backplane.  
Management Processor  
The MP comprises two PCBs, the SBC and the SBCH. The MP is a hot-swappable unit
powered by +5 V HKP. It holds the MP configuration parameters in compact flash, and it holds the
error and activity logs and the complex identification information (complex profile) in
battery-backed NVRAM. It also provides the USB network controller (MP bus). Each complex has
one MP. The MP cannot be set up for redundancy. However, it is not a single point of failure
for the complex because it can be hot-swapped. If the MP fails, the complex can still boot and
function. However, the following utility functions are lost until the MP is replaced:
Processing and storing log entries (chassis codes)  
Console functions to every partition  
OL* functions  
VFP and system alert notification  
Connection to the MP for maintenance, either locally or remotely  
Diagnostics (ODE and scan)  
Figure 1-4 Management Processor  
The SBCH provides the physical and electrical interface to the SBC, the fanning out of the USB  
to internal and external subsystems, and a LAN 10/100BT ethernet connection. It plugs into the  
HUCB and is hot-swappable. Every CPU cabinet contains one SBCH board, but only one SBCH  
contains an SBC board used as the MP for the complex. The remaining SBCH boards act as USB  
hubs.  
The SBC board is an embedded computer running system utility board (SUB) firmware. It is the  
core of the MP. It plugs into the SBCH board through a PC104 interface. The SBC provides the  
following external interfaces to the utility subsystem:  
LAN (10/100BT ethernet) for customer console access  
RS232 port for local console access for manufacturing and field support personnel  
The modem function is not included on the SBC and must be external to the cabinet.  
Compact Flash  
The Compact Flash is a PCMCIA-style memory card that plugs into the SBC board. It stores the  
MP firmware and the customer's MP configuration parameters. The parameters stored in the  
compact flash are as follows:  
Network configurations for both the public and private LANs  
User name and password combinations for logging in to the MP  
Baud rates for the serial ports  
Paging parameters for a specified alert level  
Configurable system alert parameters  
HUCB  
The HUCB, shown in Figure 1-5, is the backplane of the utility subsystem. It provides cable  
distribution for all the utility signals except the clocks. It also provides the customer LAN interface  
and serial ports. The support management station (SMS) connects to the HUCB. The system type  
switch is located on the HUCB. This board has no active circuits. It is not hot-swappable.  
Figure 1-5 HUCB  
Backplane  
The system backplane assembly fabric provides the following functionality in an sx2000 system:  
Interfaces the CLU subsystem to the system backplane and cell modules  
Houses the system crossbar switch fabrics and cell modules  
Provides switch fabric interconnect between multiple cabinets  
Generates system clock sources  
Performs redundant system clock source switching  
Distributes the system clock to crossbar chips and cell modules  
Distributes HKP to cell modules  
Terminates I/O cables to cell modules  
The backplane supports up to eight cells, interconnected by the crossbar links. A sustained total  
bandwidth of 25.5 GB/s is provided to each cell. Each cell connects to three individual XBC ASICs.
This connection enables a single chip crossing when a cell communicates with another cell in its
four-cell group. When transferring data between cells in different groups, two crossbar links
compensate for the resultant multiple chip crossings. This topology also provides for switch
fabric redundancy.
Dual rack/backplane systems contain two identical backplanes. These backplanes use 12  
high-speed interface cables as interconnects instead of the flex cable interface previously employed  
for the legacy Superdome crossbar. The sustainable bisection bandwidth between cabinets is 72  
GB/s at a link speed of 2.1 GT/s.  
Crossbar Chip  
The crossbar fabrics in the sx2000 are implemented using the XBC crossbar chip. Each XBC is a  
non-bit-sliced, eight-port non-blocking crossbar that can communicate with the CC or XBC ASICs.  
Each of the eight ports is full duplex, capable of transmitting and receiving independent packets  
simultaneously. Each port consists of 20 channels of IBM's HSS technology. Eighteen channels  
are used for packet data. One channel is used for horizontal link parity, and one channel is a  
spare. The HSS channels can run from 2.0 to 3.2 GT/s. At 3.0 GT/s, each port provides 8.5 GB/s of
sustainable bidirectional data bandwidth.  
Like the CC and the SBA, XBC implements link-level retry to recover from intermittent link  
errors. XBC can also replace a hard-failed channel with the spare channel during the retry process,  
which guarantees continued reliable operation in the event of a broken channel, or single or  
multibit intermittent errors.  
XBC supports enhanced security between hard partitions by providing write protection on key  
CSRs. Without protection, CSRs such as the routing tables can be modified by a rogue OS, causing  
other hard partitions in the system to crash. To prevent this, key CSRs in XBC can only be modified  
by packets with the Secure bit set. This bit is set by the CC, based on a register that is set only by  
a hard cell reset, which causes secure firmware to be entered. This bit is cleared by secure firmware  
before passing control to an OS.  
Switch Fabrics  
The system backplane houses the switch fabric that connects to each of the cell modules. The  
crossbar switch is implemented by a three-link-per-cell topology: three independent switch  
fabrics connected in parallel. This topology provides switch fabric redundancy in the crossbar  
switch. The backplane crossbar can be extended to an additional crossbar in a second backplane  
for a dual backplane configuration. It connects through a high-speed cable interface to the second  
backplane. This 12-cable high-speed interface replaces the flex cable interface previously used  
on the Superdome system.  
Backplane Monitor and Control  
The backplane implements the following monitor and control functions:
Backplane detect and enable functions to and from the CLU  
Backplane LED controls from the CLU  
Backplane JTAG distribution and chains  
Cabinet ID from the CLU  
Reset and power manager FPGA (RPM) and JTAG interface and header for external  
programming  
XBC reset, configuration and control  
IIC bus distribution to and from the CLU  
Clock subsystem monitor and control  
Power supply monitor and control  
Cell detect, power monitor, reset and enable to and from the CLU  
JTAG and USB data distribution to and from each cell module  
Cell ID to each cell module  
OSP FPGA functionality  
I2C Bus Distribution  
The sx2000 system I2C bus extends to the Superdome backplane (SDBP) assembly through a  
cable connected from the CLU subsystem. This cable connects from J17 on the CLU to J64 on the  
SDBP. The clock and data signals on this cable are buffered through I2C bus extenders on the  
CLU and on the backplane.  
The I2C bus is routed to an I2C multiplexer on the backplane where the bus is isolated into four  
bus segments. Three bus segments are dedicated to connections to the three RPMs. The remaining  
segment is used to daisy-chain the remaining addressable devices on the bus. Each bus segment  
is addressed through a port on the I2C multiplexer.  
Clock Subsystem  
The backplane houses two hot-swap oscillator (HSO) modules. Each HSO board generates a  
system clock that feeds into the backplane. Each HSO output is routed to the redundant clock  
source (RCS) module. The RCS module accepts input from the two HSO modules and produces  
a single system clock, which is distributed on the backplane to all cell modules and XBC ASICs.  
System Clock Distribution  
The system components that receive the system clock are the eight cell boards that plug into
the backplane and the six XBCs on the system backplane. Two backplane clock power detectors
(one for each 8-way sine clock power splitter) are on the RCS. The backplane power detector sits  
at the end of the clock tree and measures the amplitude of the clock from the RCS to determine  
if it is providing a signal of the correct amplitude to the cell boards and XBCs. Its output is also  
an alarm signal to the RPM FPGA.  
System clocks can originate from these input sources:  
the single-ended external clock input MCX connector  
the 280 MHz margin oscillator on the redundant clock source (RCS) board  
one of the 266.667 MHz oscillators on one of the HSO modules  
The source selection is determined either by firmware or by logic in the RCS.  
The clock source has alarm signals to indicate the following health status conditions to the cabinet  
management subsystem:  
Loss of power and loss of clock for each of the clock oscillator boards  
Loss of clock output to the backplanes  
The sx2000 clock system differs from the sx1000 clock system in that the system clocks are only  
supplied to the backplane crossbar ASICs and the cell boards. System clocks are not distributed  
to the I/O backplanes. Instead, independent local clock distribution is provided on the I/O  
backplane.  
Hot-Swap Oscillator  
Two hot-swappable clock oscillators combine the outputs of both oscillators to form an N+1  
redundant fault tolerant clock source. The resultant clock source drives clocks over connector  
and cable interfaces to the system backplanes.  
The HSO board contains a 266.667 MHz PECL oscillator. The output from this oscillator drives  
a 266.667 MHz band-pass SAW filter that drives a monolithic IC power amplifier. The output of  
the power amplifier is a 266.667 MHz sine wave clock that goes to the RCS. The module also has two
LEDs, one green and one yellow, that are visible through the module handle. Table 1-1 describes
the HSO LEDs. The electrical signal that controls the LEDs is driven by the RCS.  
Table 1-1 HSO LED Status Indicator Meaning

Green LED  Yellow LED  Meaning
On         Off         Module OK. HSO is producing a clock of the correct amplitude and frequency and is plugged into its connector.
Off        On          Module needs attention. HSO is not producing a clock of the correct amplitude or frequency, but it is plugged into its connector.
Off        Off         Module power is off.
sx2000 RCS Module  
The sx2000 RCS module supplies clocks to the Superdome sx2000 backplane, communicates  
clock alarms to the RPM, and accepts control input from the RPM. It has an I2C EEPROM on the  
module so that the firmware can inventory the module on system power on.  
The RCS supplies 16 copies of the sine wave system clock to the sx2000 system backplane. Eight  
copies go to the eight cell boards, six copies go to the six XBCs on the system backplane, and two  
copies to the backplane clock power detector.  
In normal operation, the RCS selects one of the two HSOs as the source of clocks for the platform.  
The HSO selected depends on whether the HSO is plugged into the backplane and on whether  
it has a valid output level. This selection is overridden if there is a connection from the clock  
input MCX connector on the master backplane. Figure 1-6 shows the locations of the HSOs and  
RCS on the backplane.  
Figure 1-6 HSO and RCS Locations  
If only one HSO is plugged in and its output is of valid amplitude, then it is selected. If its output  
is valid, then a green LED on the HSO is lit. If its output is not valid, then a yellow LED on the  
HSO lights and an alarm signal goes from the RCS to the RPM. The RCS provides a clock that is
approximately 100 kHz less than the correct frequency, even if the outputs of the HSOs are not
of valid amplitude or no HSOs are plugged in.
If both HSOs are plugged in and their output amplitudes are valid, then one of the two is selected  
as the clock source by logic on the RCS. The green LEDs on both HSOs light.  
If one of the HSOs outputs does not have the correct amplitude then the RCS uses the other one  
as the source of clocks and sends an alarm signal to the RPM indicating which oscillator failed.  
The green LED lights on the good HSO and the yellow LED lights on the failed HSO.  
If an external clock cable is connected from the master backplane clock output MCX connector  
to the slave backplane clock input MCX connector, then this overrides any firmware clock
selections. The clock source for the slave backplane becomes the master backplane.
If firmware selects the margin oscillator as the source of clocks, then it is the source of clocks as  
long as there is no connection to the clock input MCX connector from the master backplane.  
If the firmware selects the external margin clock SMB connectors as the source of clocks, then it  
is the source of clocks as long as no connection exists to the clock input MCX connector from the  
master backplane.  
Cabinet ID  
The backplane receives a 6-bit cabinet ID from the CLU interface J64 connector. The cabinet ID  
is buffered and routed to each RPM and to each cell module slot. The RPM decodes the cabinet  
number from the cabinet ID and uses this bit to alter the cabinet number bit in the ALBID byte  
sent to each XBC through the serial bit stream.  
Cell ID  
The backplane generates a 3-bit slot ID for each cell slot in the backplane. The slot ID and five  
bits from the cabinet ID are passed to each cell module as the cell ID.  
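The following sketch illustrates one way the 3-bit slot ID and five bits of the 6-bit cabinet ID could be combined into a cell ID. The exact bit packing is an assumption for illustration, not the documented backplane encoding.

# Illustrative composition of a cell ID from cabinet ID and slot ID.
def cell_id(cabinet_id, slot_id):
    cab5 = cabinet_id & 0x1F        # five of the six cabinet ID bits
    slot = slot_id & 0x07           # 3-bit slot ID
    return (cab5 << 3) | slot       # assumed packing: cabinet bits above slot bits

print(bin(cell_id(cabinet_id=1, slot_id=5)))  # 0b1101 -> cabinet 1, slot 5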
Backplane Power Requirements and Power Distribution  
The dc power supply for the backplane assembly runs from the cabinet power supply subsystem  
through two power cables attached to the backplane. Connectors for the dc supply input have  
the same reference designators and are physically located in the same position as on the  
Superdome system backplane. The power cables are reused cable assemblies from the Superdome  
system and the supply connection is not redundant. One cable is used for housekeeping supply  
input. A second cable is used for 48 V supply input.  
The backplane has two slots for power supply modules. The power supply connector for each  
slot has a 1-bit slot address to identify the slot. The address bit for power supply slot 0 is grounded.  
The address bit for slot 1 floats on the backplane. The power supply module provides a pull-up  
resistor on the address line on slot 1. The power supply module uses the slot address bit as bit  
A0 for generating a unique I2C address for the FRU ID prom. Figures 1-7 and 1-8 identify and  
show the location of the backplane power supply modules.  
Figure 1-7 Backplane Power Supply Module  
Each power supply slot has a power supply detect bit that determines if the power supply module  
is inserted into the backplane slot. This bit is routed to an input on the RPMs. The RPM provides  
a pull-up resistor for logic 1 when the power supply module is missing. When the power supply  
module is inserted into the slot, the bit is grounded by the power supply and logic 0 is detected  
by the RPM, indicating that the power supply module is present in the backplane slot.  
Figure 1-8 Backplane (Rear View)  
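The sketch below mirrors the presence-detect and slot-address logic described above. The signal polarities follow the text (pull-up = 1 means empty, grounded = 0 means present); the FRU ID base address is a placeholder, not the actual part's address.

# Sketch of the power supply presence and FRU ID addressing described above.
FRU_ID_BASE = 0x50      # placeholder 7-bit I2C base address

def supply_present(detect_bit):
    return detect_bit == 0          # grounded by an inserted supply

def fru_id_address(slot_address_bit):
    return FRU_ID_BASE | (slot_address_bit & 1)   # slot bit used as address bit A0

print(supply_present(1), hex(fru_id_address(1)))  # (False, '0x51')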
CPUs and Memories  
The cell provides the processing and memory resources required by each sx2000 system  
configuration. Each cell includes the following components: four processor module sockets, a  
single cell (or coherency) controller ASIC, a high-speed crossbar interface, a high-speed I/O  
interface, eight memory controller ASICs, capacity for up to 32 double-data rate (DDR) DIMMs,  
high-speed clock distribution circuitry, a management subsystem interface, scan (JTAG) circuitry  
for manufacturing test, and a low-voltage DC power interface. Figure 1-9 shows the locations  
of the major components.  
Figure 1-9 Cell Board  
Cell Controller  
The heart of the cell design is the cell controller. The cell controller provides two front side bus  
(FSB) interfaces, with each FSB connected to two processor modules. The communication  
bandwidth is 6.8 GB/s sustained at 266.67 MHz on each FSB. This bandwidth is shared by the  
two processor modules on the FSB. Interfaces external to the cell provided by the cell controller  
consist of three crossbar links, called the fabric interface, and a remote I/O subsystem link. The  
fabric interface enables multiple cells to communicate with each other across a self-correcting,  
high-speed communication pathway. Sustained crossbar bandwidth is 8.5 GB/s per link at 3.0  
GT/s, or 25.5 GB/s across the three links.  
The remote I/O link provides a self-correcting, high-speed communication pathway between the  
cell and the I/O subsystem through a pair of cables. Sustained I/O bandwidth is 5.5 GB/s for a  
50% inbound and outbound mix, and approximately 4.2 GB/s for a range of mixes. The cell  
controller interfaces to the cell's memory system. The memory interface is capable of providing
a sustained bandwidth of 14 to 16 GB/s at 266.67 MHz to the cell controller.  
Processor Interface  
The cell controller has two separate FSB interfaces. Each of those FSBs is connected to two  
processor sockets in a standard three-drop FSB configuration. The cell controller FSB interface  
is pinned out exactly like that of its predecessor cell controller to preserve past cell routing. The  
cell controller pinout minimizes total routing delay without sacrificing timing skew between the  
FSB address and data and control signals. Such tight routing controls enable the FSB to achieve  
a frequency of 266.67 MHz, and the data to be transmitted on both edges of the interface clock.  
The 128-bit FSB can achieve 533.33 MT/s, thus 8.5 GB/s burst data transfer rate is possible.  
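The FSB figures quoted above follow from simple arithmetic, restated here as a short worked example.

# Worked arithmetic for the FSB figures quoted above.
fsb_clock_mhz = 266.67
transfers_per_clock = 2            # data on both clock edges
bus_width_bytes = 128 // 8         # 128-bit data bus

mt_per_s = fsb_clock_mhz * transfers_per_clock          # ~533.33 MT/s
burst_gb_per_s = mt_per_s * bus_width_bytes / 1000      # ~8.5 GB/s burst rate
print(round(mt_per_s, 2), round(burst_gb_per_s, 2))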
Processors  
Several Itanium and PA-RISC processor families are supported; the processors are
already installed on the cell board. All processors require that a minimum firmware version be
installed. For the supported processors, see Table 1-2.
Table 1-2 Supported Processors and Minimum Firmware Versions

Processor Family                                                     Minimum Firmware Version     Core Frequency
Intel® Itanium® single-core processors with 9 MB cache              4.3e (IPF SFW 004.080.000)   1.6 GHz
Intel® Itanium® dual-core processors with 18 MB cache               5.5d (IPF SFW 005.024.000)   1.6 GHz
Intel® Itanium® dual-core processors with 24 MB cache               5.5d (IPF SFW 005.024.000)   1.6 GHz
PA-8900 dual-core processor with 64 MB cache                        PDC_FW 042.009.000           1.1 GHz
Intel® Itanium® dual-core 9100 series processors with 18 MB cache   8.6d (IPF SFW 009.022.000)   1.6 GHz
Intel® Itanium® dual-core 9100 series processors with 24 MB cache   8.6d (IPF SFW 009.022.000)   1.6 GHz
Rules for Processor Mixing  
Processor families cannot be mixed on a cell board or within a partition  
Processor frequencies cannot be mixed on a cell board or within a partition  
Cache sizes cannot be mixed on a cell board or within a partition  
Major processor steppings cannot be mixed on a cell board or within a partition  
Full support for Itanium and PA-RISC processors within the same complex but in different  
partitions  
Cell Memory System  
Each cell in the sx2000 system has its own independent memory subsystem. This memory  
subsystem consists of four logical memory subsystems that achieve a combined bandwidth of  
17 GB/s peak, 14-16 GB/s sustained. This cell is the first of the Superdome designs to support the
use of DDR-II DRAM. These DIMMs are based on the DDR-II protocol, and the cell design
supports DIMM capacities of 1, 2, or 4 GB using monolithic DRAMs. Nonmonolithic (stacked)
DRAMs are not supported on the sx2000. The additional capacitive load and requirement for
additional chip selects are not accommodated by the new chipset. All DIMMs used in the sx2000
are compatible with those used in other new CEC platforms. However other platforms can  
support DIMMs based on nonmonolithic DRAMs that are incompatible with the sx2000. Cell  
memory is illustrated in Figure 1-10.  
Figure 1-10 Cell Memory  
DIMMs are named according to both physical location and loading order. The physical location  
is used for connectivity on the board and is the same for all quads. Physical location is a letter  
(A or B) followed by a number (0, 1, 2, or 3). The letter indicates which side of the quad the DIMM  
is on. A is the left side, or the side nearest the CC. The DIMMs are then numbered 0 through 3,
starting at the outer DIMM and moving inward toward the memory controllers.
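A one-line sketch that enumerates the per-quad physical locations implied by this naming scheme (side letter A or B, position 0 through 3).

# Enumerate the eight DIMM physical locations in one quad.
quad_locations = [f"{side}{pos}" for side in "AB" for pos in range(4)]
print(quad_locations)   # ['A0', 'A1', 'A2', 'A3', 'B0', 'B1', 'B2', 'B3']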
Memory Controller  
The memory controller's primary function is to source address and control signals and to
multiplex and demultiplex data between the CC and the devices on the DDR DIMMs. Four
independent memory blocks, consisting of two memory controllers and eight DIMMs, are  
supported by interface buses running between the CC and the memory controller. The memory  
controller converts these link streams to the correct signaling voltage levels (1.8 V) and timing  
for DDR2 protocol.  
Bandwidth is limited by the memory interface buses that transfer data between the CC and the  
memory controller. The memory controller also performs the write (tag update) portion of a  
read-modify-write (RMW) access. The memory controller is bit sliced, and two controllers are  
required to form one 72-bit CC memory interface data (MID) bus. The CC MID buses are  
bidirectional, source synchronous, and run at 533.33 MT/s. The memory side of a pair of memory  
controller ASICs consists of two 144-bit bidirectional DDR2 SDRAM data buses operating at  
533.33 MT/s. Each bus supports up to four echelons of DRAMs.  
DIMM Architecture  
The fundamental building block of the DIMM is a DDR2 DRAM with a 4-bit data width. Each  
DIMM transfers 72 bits of data on a read/write, and the data is double-clocked at a clock frequency  
of 266.67 MHz for an effective peak transfer rate of 533.33 MT/s. Each DIMM includes 36 DRAM  
devices for data storage and two identical custom address buffers. These buffers fan out and  
check the parity of address and control signals received from the memory controller. The DIMM  
densities for the sx2000 are 1 GB (256 Mb DRAMs), 2 GB (512 Mb DRAMs), and 4 GB (1 Gb
DRAMs). The new sx2000 chipset DIMMs have the same mechanical form factor as the DIMMs
used in Integrity systems, but the DIMM and the connector are keyed differently from previous
DIMM designs to prevent improper installation. The DIMM is roughly twice the height of an
industry-standard DIMM. This height increase enables the DIMM to accommodate twice as  
many DRAMs as an industry-standard DIMM and provides redundant address and control  
signal contacts not available on industry-standard DDR2 DIMMs.  
Memory Interconnect  
MID bus data is transmitted through the four 72-bit, ECC-protected MID buses, each with a clock  
frequency equal to the CC core frequency. The data is transmitted on both edges of the clock, so  
the data transfer rate (533 MT/s) of each MID is twice the MID clock frequency (267 MHz). A  
configuration of at least eight DIMMs (two in each quadrant) activates all four MID buses. The  
theoretical bandwidth of the memory subsystem can be calculated as follows: (533 MT/s * 8
Bytes/T * 4) = 17 GB/s. The MID buses are bit-sliced across two memory controllers with 36 bits
of data going to each memory controller. In turn, each memory controller takes that high-speed  
data (533 MT/s) from the MID, and combines four consecutive MID transfers to form one 144-bit  
DRAM bus. This DRAM bus is routed out in two 72-bit buses to two DIMM sets, which include  
four DIMMs each. The DDR DRAM bus runs at 267 MT/s and data is clocked on both edges of  
the clock.  
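The peak-bandwidth arithmetic from the paragraph above, restated as a short worked example.

# Worked arithmetic for the memory figures quoted above: four MID buses at
# 533 MT/s and 8 bytes per transfer give the theoretical peak.
mid_buses = 4
mt_per_s = 533
bytes_per_transfer = 8

peak_gb_per_s = mt_per_s * bytes_per_transfer * mid_buses / 1000
print(round(peak_gb_per_s, 1))   # ~17 GB/s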
The DDR DRAM address and control (MIA) signals for each quadrant originate at the CC and  
are routed to the DIMMs through the memory controller. On previous systems, these signals  
did not touch the memory chips; they were routed to the DIMMs through fan out buffers. The  
DRAM address and control signals are protected by parity so that signaling errors are detected  
and do not cause silent data corruption. The MIA bus, comprised of the SDRAM address and  
control signals, is checked for parity by the memory controller. Each of the 32 DIMMs can
generate a unique parity error signal that is routed to one of four parity error inputs per memory
controller. Each memory controller then logically gates the DIMM parity error signals it receives  
with its own internal parity checks for the MIC and MIT buses. This logical gating results in a  
single parity error output that is driven to the CC and latched as an event in an internal  
memory-mapped register.  
Eight unique buses for command and control signals are transmitted from the CC to each memory  
controller simultaneously with the appropriate MID bus interconnect. Each MIC bus includes  
four signals running at 533 MT/s. Each command on the MIC bus takes four cycles to transmit  
and is protected by parity so that signaling errors are detected and do not cause silent data  
corruption.  
Four MIT buses are routed between the CC and the designated tag memory controllers. MIT  
buses run at 533 MT/s and use the same link type as the MID buses. Each MIT bus includes six  
signals and a differential strobe pair for deskewing. As with the MIA and MIC buses, the MIT  
is protected by parity so that signaling errors are detected and do not cause silent data corruption.  
Mixing Different Sized DIMMs  
Mixing different sized DIMMs is allowed, provided you follow these rules:  
An echelon of DIMMs consists of two DIMMs of the same type.  
All supported DIMM sizes can be present on a single cell board at the same time, provided
the previous rule is satisfied.
Memory must be added in one echelon increments.  
The amount of memory contained in an interleaved group must be 2^n bytes (see the sketch following this list).
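A minimal sketch of the interleaved-group size rule above: the total must be a power of two bytes. The sizes used in the example are illustrative.

# Check that an interleaved group size is a power of two bytes.
def valid_interleave_group(total_gb):
    total_bytes = int(total_gb * 2**30)
    return total_bytes > 0 and (total_bytes & (total_bytes - 1)) == 0  # power of two

print(valid_interleave_group(16))   # True:  2^34 bytes
print(valid_interleave_group(6))    # False: not a power of two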
Memory Interleaving  
Memory is interleaved in the following ways on sx2000 systems:  
MBAT (across DIMMs)  
Cell map (across cells)  
Link (across fabrics)  
Memory Bank Attribute Table  
The memory bank attribute table (MBAT) interleaving is done on a per-cell basis before the  
partition is rendezvoused. The cell map and fabric interleaving are done after the partition has  
rendezvoused. SDRAM on the cell board is installed in physical units (echelons). The sx2000 has  
16 independent echelons. Each echelon consists of two DDR DIMMs. Each rank can have multiple  
internal logical units called banks, and each bank contains multiple rows and columns of memory.  
An interleaving algorithm determines how a rank, bank, row, or column address is formed for  
a particular physical address.  
The 16 echelons in the memory subsystem can be subdivided into four independent memory  
quadrants accessed by four independent MID buses. Each quadrant contains two independent  
SDRAM buses. Four echelons can be installed on each SDRAM bus. The CC contains four MBATs,  
one for each memory quadrant. Each MBAT contains eight sets of routing CSRs (one per rank).  
Each routing CSR specifies the bits of the address that are masked or compared to select the  
corresponding rank, referred to as interleave bits. The routing CSR also specifies how the  
remaining address bits are routed to bank, row, and column address bits.  
To optimize bandwidth, consecutive memory accesses target echelons that are as far from each  
other as possible. For this reason, the interleaving algorithm programs the MBATs so that  
consecutive addresses target echelons in an order that skips first across quadrants, then across  
SDRAM buses, then across echelons per SDRAM bus, then across banks per rank.  
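The following sketch illustrates the skip-first-across-quadrants ordering described above for consecutive cache-line addresses. The counts and tuple labels are illustrative only, not the actual MBAT programming.

# Illustrative interleave ordering: quadrants first, then SDRAM buses, then echelons.
QUADS, BUSES_PER_QUAD, ECHELONS_PER_BUS = 4, 2, 4

def echelon_for_line(line_index):
    quad = line_index % QUADS
    bus = (line_index // QUADS) % BUSES_PER_QUAD
    echelon = (line_index // (QUADS * BUSES_PER_QUAD)) % ECHELONS_PER_BUS
    return (quad, bus, echelon)

# Eight consecutive cache lines land on eight different quadrant/bus pairs.
print([echelon_for_line(i) for i in range(8)])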
Cell Map  
Cell mapping creates a scheme that is easy to implement in hardware and enables easy calculation
of the interleaving parameters for software. To do this, part of the physical address is used to
perform a lookup into a table, which gives the actual physical cell and the number of ways of
interleaving into memory at this address. Be aware of the following:
A portion of memory that is being interleaved across must start at an offset that is a multiple  
of the memory chunk for that entry. For example, to interleave across 16 GB of memory with  
one entry, the starting address for this chunk must be 0 GB, 16 GB, 32 GB, 48 GB, or 64 GB.  
If using three 2 GB entries to interleave across three cells, then the multiple must be 2 GB,  
not 6 GB.  
Interleaving is performed across the actual cells within the system. Interleaving can be done  
across a minimum of 0.5 GB on a cell, and a maximum interleave across 256 GB per cell.  
Each cell in an interleave group must have the same amount of memory interleaved. That  
is, you cannot interleave 2 GB in one cell and 4 GB in another cell.  
The cell map remains the same size as in previous HP Integrity CECs.  
Link Interleaving  
The link interleaving functionality did not exist in sx1000. This logic is new for the sx2000 CC.  
The sx2000 enables cells to be connected through multiple paths. In particular, each CC chip has  
three crossbar links. When one CC sends a packet to another CC, it must specify which link to  
use.  
The CC is the sx2000 chipset cell controller. It interfaces to processors, main memory, the crossbar  
fabric, an I/O subsystem, and processor-dependent hardware (PDH). Two data path CPU bus  
interfaces are implemented, with support for up to four processors on each bus. The CC supports  
bus speeds of 200 MHz and 267 MHz. The 128-bit data bus is source synchronous, and data can  
be transferred at twice the bus frequency: 400 MT/s or 533 MT/s. The address bus is 50 bits wide,  
but only 44 bits are used by the CC. Error correction is provided on the data bus and parity  
protection is provided on the address bus.  
Memory Error Protection  
All of the CC cache lines are protected in memory by an error correction code (ECC). The sx2000  
memory ECC scheme is significantly different from the sx1000 memory ECC scheme. An ECC  
code word is 288 bits long: 264 bits of payload (data and tag) and 24 bits of redundancy. An ECC  
code word is contained in each pair of 144-bit chunks. The first chunk in the pair (for example  
chunk 0 in the 0,1 pair) contains all the even nibbles of the payload and redundancy, and the  
second chunk contains all the odd nibbles. The memory data path (MDP) block checks for, and  
if necessary, corrects any correctable errors.  
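The even/odd nibble split can be pictured with the following minimal sketch. It is illustrative only; the real MDP operates on 288-bit code words in hardware, and the nibble-array representation here is an assumption made for readability.

#include <stdint.h>

#define CODEWORD_NIBBLES 72   /* 288 bits = 264 payload bits + 24 redundancy bits */
#define CHUNK_NIBBLES    36   /* each memory chunk is 144 bits wide               */

/* Distribute one 288-bit ECC code word across a pair of 144-bit chunks:
 * even-numbered nibbles go to the first chunk of the pair and odd-numbered
 * nibbles to the second, as described above. */
void split_codeword(const uint8_t codeword[CODEWORD_NIBBLES],
                    uint8_t even_chunk[CHUNK_NIBBLES],
                    uint8_t odd_chunk[CHUNK_NIBBLES])
{
    for (int n = 0; n < CODEWORD_NIBBLES; n++) {
        if (n % 2 == 0)
            even_chunk[n / 2] = codeword[n] & 0x0F;
        else
            odd_chunk[n / 2] = codeword[n] & 0x0F;
    }
}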
DRAM Erasure  
A common cause of a correctable memory error is a DRAM failure; address or control bit failures
are also a common cause. The ability to correct this type of memory failure in hardware is called
chip kill. Chip kill ECC schemes have added hardware logic that enables them to detect and correct
more than a single-bit error when the hardware is programmed to do so. A common  
implementation of traditional chip kill is to scatter data bits from each DRAM component across  
multiple ECC code words, so that only one bit from each DRAM is used per ECC code word.  
Double chip kill is an extension to memory chip kill that enables the system to correct multiple  
ECC errors in an ECC code word. Double chip kill is also known as DRAM erasure.  
DRAM erasure is invoked when the number of correctable memory errors exceeds a threshold.  
It can be invoked on a memory subsystem, bus, rank or bank. PDC tracks the errors on the  
memory subsystem, bus, rank and bank in addition to the error information it tracks in the PDT.  
PDC Functional Changes  
There are three primary threads of control in the processor dependent code (PDC): the bootstrap,  
the error code, and the PDC procedures. The bootstrap is the primary thread of control until
the OS is launched. The boot console handler (BCH) acts as a user interface for the bootstrap,  
but can also be used to diagnose problems with the system. The BCH can call the PDC procedures  
but this explicit capability is only available in MFG mode through the Debug menu.  
The PDC procedures are the primary thread of control once the OS launches. Once the OS  
launches, the PDC code is only active when the OS calls a PDC procedure or there is an error  
that calls the error code. Normally, the error thread of control returns control back to the OS  
through OS_HPMC, OS_TOC or RFI (LPMC or CMCI). In some cases, the HPMC or MCA handler  
halts the cell or partition.  
If a correctable memory error occurs during run time, the new chipset logs the error and corrects  
it in memory (reactive scrubbing). Diagnostics periodically call PDC_PAT_MEM (Read Memory  
Module State Info) to read the errors logs. When this PDC call is made, system firmware updates  
the PDT, and deletes entries older than 24 hours in the structure that counts how many errors  
have occurred for each memory subsystem, bus, rank or bank. When the counts exceed the  
thresholds, PDC invokes DRAM erasure on the appropriate memory subsystem, bus, rank or  
bank. Invoking DRAM erasure does not interrupt the operation of the OS.  
When PDC invokes DRAM erasure, the information returned by PDC_PAT_MEM (Read Memory  
Module State Info) indicates the scope of the invocation and provides information to enable  
diagnostics to determine why it was invoked. PDC also sends IPMI events indicating that DRAM  
erasure is in use. When PDC invokes DRAM erasure, the correctable errors that caused DRAM  
erasure are removed from the PDT. Because invoking DRAM erasure increases the latency of  
memory accesses and reduces the ability of ECC to detect multibit errors, you must notify the  
customer that the memory subsystem must be serviced. HP recommends that the memory  
subsystem be serviced within a month of invoking DRAM erasure on a customer machine.  
The thresholds for invoking DRAM erasure are incremental, so that PDC invokes DRAM erasure  
on the smallest part of memory subsystem necessary to protect the system against another bit  
error.  
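A minimal sketch of that incremental behavior follows. The counter layout, threshold values, and names are assumptions made only for illustration and do not reflect the actual PDC implementation.

#include <stdbool.h>

/* Scopes on which DRAM erasure can be invoked, smallest first. */
enum erasure_scope { SCOPE_NONE, SCOPE_BANK, SCOPE_RANK, SCOPE_BUS, SCOPE_SUBSYSTEM };

/* Hypothetical correctable-error counters kept for the last 24 hours. */
struct ce_counters {
    unsigned bank;
    unsigned rank;
    unsigned bus;
    unsigned subsystem;
};

/* Hypothetical, illustrative thresholds: erasure is invoked on the smallest
 * scope whose counter has crossed its limit, which keeps the added memory
 * latency as small as possible while still protecting against a further
 * bit error. The real PDC thresholds are not documented here. */
enum erasure_scope erasure_scope_needed(const struct ce_counters *c)
{
    if (c->bank >= 8)        return SCOPE_BANK;
    if (c->rank >= 16)       return SCOPE_RANK;
    if (c->bus >= 32)        return SCOPE_BUS;
    if (c->subsystem >= 64)  return SCOPE_SUBSYSTEM;
    return SCOPE_NONE;
}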
Platform Dependent Hardware  
The platform dependent hardware (PDH) includes functionality that is required by both system  
and management firmware. The PDH provides the following features:  
An interface that passes multiple forms of information between system firmware and the  
MP on the SBC by the platform dependent hardware controller (PDHC, on the PDH daughter  
card).  
Flash EPROM for PDHC boot code storage.  
PDHC SRAM for operational instruction and data storage.  
Memory mapped control and status registers (CSRs) control the cell for management needs.  
System management bus (SMBus) reads the processor module information EEPROM, scratch  
EEPROM, and thermal sensing device.  
I2C bus reads PDH, cell, and cell power board FRU ID information.  
Serial presence detect (SPD) bus detects and investigates loaded DIMMs.  
Timing control of cell reset signals.  
Logic analyzer ports for access to important PDH signals.  
PDH resources accessible by the processors (system firmware) and the management  
subsystem.  
Flash EPROM for system firmware bootstrap code storage and update capability.  
System firmware scratch pad SRAM for operation instruction and data storage.  
Battery backed NVRAM and real time clock (RTC) chip to provide wall clock time.  
Memory-mapped registers for configuration related information  
Console UARTs (moved from I/O space).  
Low level debug and general purpose debug ports (UART).  
Trusted platform monitor (TPM).  
Reset  
The sequencing and timing of reset signals is controlled by the LPM, a field-programmable gate  
array (FPGA) that resides on the cell. The LPM is powered by the housekeeping rail and has a  
clock input from the PDH daughter card that runs continuously at 8 MHz. This enables the LPM  
and the rest of the utility subsystem interface to operate regardless of the power state of the cell.  
Cell reset can be initiated from the following sources:  
Power enable of the cell (initial power-on)  
Backplane reset causes installed cells to reset, or cell reset initiated from PDHC in direct  
response to an MP command or during a system firmware update  
System firmware-controlled soft reset initiated by writing into the PDH interface chip test  
and reset register  
The LPM contains a large timer that gates all the reset signals and ensures the proper signaling  
sequence regardless of the source of that reset event. The most obvious reset sequencing event  
is the enabling of power to the cell, but the sequencing of the reset signals is consistent even if  
the source of that reset is an MP command reset for the main backplane, a partition, or the cell  
itself.  
Cell OL*  
For online add (OLA) of a cell, the CC goes through the normal power on reset sequence. For  
online delete (OLD) of a cell, software cleans up the I/O (SBA) interface, puts it into reset mode,
and holds it there. When the I/O (SBA) link is held in reset, the cell is ready; power can be turned
off and the cell can be removed.  
I/O Subsystem  
The SIOBP is an update of the GXIOB, with a new set of chips that increase the board's internal
bandwidth and support the newer PCI-X 2.0 protocol. The SIOBP uses most of the same  
mechanical parts as the GXIOB. The connections between the I/O chassis and the rest of the  
system have changed. The cell board to I/O backplane links are now multichannel, high-speed  
serial (HSS) based rather than a parallel interface. Because of this, the SIOBP can only be paired  
with the sx2000 cell board and is not backward compatible with earlier Superdome cell boards.  
The term PCI-X I/O chassis refers to the assembly containing an SIOBP. All slots are capable of  
supporting both PCI and PCI-X cards.  
A new concept for the sx2000 is a fat rope. A fat rope is logically one rope that has 32 wires. It  
consists of two single ropes but has the four command wires in the second single rope removed.  
The concept of a single rope remains unchanged. A single rope has 18 signals: 10 bidirectional,
single-ended address and data bits; two pairs of unidirectional, single-ended lines that carry
commands (one pair in each direction); and a differential strobe pair for each direction. These are
all enhanced ropes, which support double the bandwidth of plain ropes and additional protocol behavior.
Ropes transfer source-synchronous data on both edges of the clock and can run at either of two  
speeds.  
The major components in the I/O chassis are the system bus adapter (SBA) ASIC and 12 logical  
bus adapter (LBA) ASICs. The high speed serial (HSS) links (one inbound and one outbound)  
are a group of 20 high-speed serial differential connections using a cable that enables the I/O  
chassis to be located as much as 14 feet away from the cell board. This enables the use of an I/O  
expansion cabinet to provide more I/O slots than fit in the main system cabinet.  
Enhanced ropes are fast, narrow links that connect singly or in pairs between the SBA and four  
specific LBAs. Fat ropes are enhanced dual-width ropes that are treated logically as a single rope.  
A fat rope can connect to an LBA. Dual fat ropes can connect to an LBA.  
A PCI-X I/O chassis consists of four printed circuit assemblies (the PCI-X I/O backplane, the PCI-X
I/O power board, the PCI-X I/O power transfer board, and the doorbell board) plus the necessary  
mechanical components required to support 12 PCI card slots.  
The master I/O backplane (HMIOB) provides easy connectivity for the I/O chassis. The HSS link  
and utilities signals come through the master I/O backplane. Most of the utilities signals travel  
between the UGUY and the I/O backplane, with a few passing through to the I/O power board.  
The I/O power board contains all the power converters that produce the various voltages needed  
on the I/O backplane. Both the I/O backplane and the I/O power board have FRU EEPROMs. An  
I/O power transfer board provides the electrical connections for power and utility signals between  
the I/O backplane and I/O power board.  
PCI-X Backplane Functionality  
The majority of the functionality of a PCI-X I/O backplane is provided by a single SBA ASIC and  
twelve LBA ASICs (one per PCI slot). A dual-slot hot-plug controller chip plus related logic is  
also associated with each pair of PCI slots. The SBA is the primary I/O component. Upstream,  
the SBA communicates directly with the cell controller CC ASIC of the host cell board through  
a high-bandwidth logical connection (HSS link). Downstream, the SBA spawns 16 logical ropes  
that communicate with the LBA PCI interface chips. Each PCI chip produces a single 64-bit PCI-X  
bus supporting a single PCI or PCI-X add-in card. The SBA and the CC are components of the  
sx2000 and are not compatible with the legacy or Integrity CECs.  
The newer LBA PCI chip design replaces the previous LBA chip design and provides
PCI-X 2.0 features. Link signals are routed directly from one of the system connector groups to
the SBA. The 16 ropes generated by the SBA are routed to the LBA chips as follows:  
Four LBAs are tied to the SBA by single-rope connections and are capable of peak data
rates of 533 MB/s (equivalent to the peak bandwidth of PCI 4x or PCI-X 66).
Six LBAs are tied to the SBA by either a single fat-rope or a dual-rope connection and are
capable of peak data rates of 1.06 GB/s (equivalent to the peak bandwidth of PCI-X 133).
Two LBAs use dual-fat-rope connections and are capable of peak data rates of 2.12 GB/s
(equivalent to the peak bandwidth of PCI-X 266).
Internally, the SBA is divided into two halves, each supporting four single ropes and four fat  
ropes. The I/O backplane routing interconnects the ASICs in order to balance the I/O load on  
each half of the SBA.  
SBA Chip CC-to-Ropes  
The SBA chip communicates with the CC on the cell board through a pair of high-speed serial  
unidirectional links (HSS or e-Links). Each unidirectional e-Link consists of 20 serial 8b/10b  
encoded differential data bits operating at 2.36 GT/s. This yields a peak total bidirectional HSS  
link bandwidth of 8.5 GB/s. Internally, SBA routes this high-speed data to and from one of two  
rope units. Each rope unit spawns four single ropes and four fat ropes. A maximum of two like
ropes can connect to an LBA. This means that the SBA-to-LBA rope configurations can be single
rope, dual rope, fat rope, or dual fat rope.
In the default configuration, ropes operate with a 133 MHz clock and so transfer data at 266 MT/s,
for a peak bandwidth of 266 MB/s per single rope. In the enhanced configuration, ropes operate
with a 266 MHz clock and so transfer data at 533 MT/s, for a peak bandwidth of 533 MB/s per
single rope. On the SIOBP, firmware is expected to always configure the 266 MHz enhanced ropes.
Ropes can be connected to an LBA either individually or in pairs. A single rope can sustain up  
to PCI 4x data rates (full bandwidth support for a 64-bit PCI card at 33 or 66 MHz, for a 64-bit
PCI-X card at 66 MHz, or for a 32-bit PCI-X card at 133 MHz). A dual rope or fat rope can sustain
PCI 8x data rates (64-bit PCI-X card at 133 MHz). A dual fat rope can sustain PCI 16x data rates  
(64-bit PCI-X card at 266 MHz). Because of the internal architecture of the SBA, when two ropes  
are combined, they must be adjacent even/odd pairs. Ropes 0 and 1 can be combined, but not 1  
and 2. The two paired ropes must also be of the same type, either single or fat.  
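The bandwidth figures quoted above follow from simple arithmetic. The short sketch below reproduces them under the assumption of an effective one-byte payload per transfer on a single rope, with data moving on both clock edges; the exact 533 MB/s, 1.06 GB/s, and 2.12 GB/s figures correspond to the nominal 266.67 MHz enhanced clock.

#include <stdio.h>

/* Peak rope bandwidth in MB/s: clock (MHz) x 2 transfers per clock
 * (source-synchronous data on both edges) x payload bytes per transfer.
 * The one-byte payload per transfer per single rope is an assumption used
 * only to reproduce the quoted figures. */
static double peak_mb_per_s(double clock_mhz, int bytes_per_transfer)
{
    return clock_mhz * 2.0 * bytes_per_transfer;
}

int main(void)
{
    /* The guide rounds these to 266 MB/s, 533 MB/s, 1.06 GB/s, and 2.12 GB/s. */
    printf("single rope, default  : %4.0f MB/s\n", peak_mb_per_s(133.33, 1));
    printf("single rope, enhanced : %4.0f MB/s\n", peak_mb_per_s(266.67, 1));
    printf("dual rope or fat rope : %4.0f MB/s\n", peak_mb_per_s(266.67, 2));
    printf("dual fat rope         : %4.0f MB/s\n", peak_mb_per_s(266.67, 4));
    return 0;
}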
The location of the ropes on the SBA chip determines the rope mapping to PCI slots on the I/O
backplane, as shown in Figure 1-11.
Figure 1-11 PCI-X I/O Rope Mapping  
Ropes-to-PCI LBA Chip  
The LBA ASIC interfaces between the ropes and the PCI bus. The primary enhancement to the  
LBA ASIC is support of PCI-X 2.0 266 MHz bus operation. The extra bandwidth requirements  
of the higher speed PCI-X bus are met by widening the ropes interface to accept single, dual, fat,  
or dual fat ropes. Another LBA enhancement is selectable ECC protection on the data bus.  
The SIOBP board has six LBAs configured with either dual ropes or a fat rope. This provides  
enough bandwidth for PCI-X 133 MHz 64-bit or less operation. Two LBA chips are configured  
with dual fat ropes (slots 5 and 6) that provide enough bandwidth to support PCI-X 2.0 running  
at 266 MHz 64-bit or less. Each LBA is capable of only 3.3 V or 1.5 V signaling on the PCI bus.  
Cards that allow only 5 V signaling are not supported; PCI connector keying prevents insertion  
of such cards.  
Each LBA has control and monitor signals for use with a PCI hot-swap chip. It also converts PCI  
interrupts into interrupt transactions which are fed back to the CPUs.  
PCI Slots  
For maximum performance and availability, each PCI slot is sourced by its own LBA chip and  
is supported by its own portion of a hot-plug controller. All slots are designed to Revision 2.2 of  
the PCI specification and Revision 2.0a of the PCI-X specification and can support full size 64–bit  
cards with the exceptions noted below. Shorter or smaller cards are also supported, as are 32-bit  
cards. Slot 0 support for the core I/O card is removed on the SIOBP.  
VAUX3.3 and PME are not supported on SIOBP PCI slots. SMBus is supported in
hardware through two I2C muxes. Firmware can configure the muxes to enable communication
to any of the 12 PCI slots. JTAG is not supported for PCI slots.  
Each device on a PCI bus is assigned a physical device number. On the earlier HIOB, the slot was
configured as device 0. However, the PCI-X specification requires the host bridge to be
device 0, so on the SIOBP the slot is configured as device 1.
The SIOBP's ten outermost slots support only 3.3 V signaling (PCI or PCI-X Mode 1). The two  
innermost slots support either 3.3 V or 1.5 V (PCI-X Mode 2) signaling. All SIOBP PCI connectors  
physically prevent 5 V signaling cards from being installed.  
Mixed PCI-X and PCI Express I/O Chassis  
The 12-slot mixed PCI-X/PCI Express (PCIe) I/O chassis was introduced for the sx2000 Superdome  
with the two new dual-core Intel® Itanium® processors and is heavily leveraged from the 12-slot
PCI-X I/O chassis. The primary change replaces six of the LBAs with a new LBA ASIC to provide
six PCI Express 1.1 compliant slots. The PCI-X/PCIe I/O chassis is supported only for Intel®
Itanium® dual-core processors.
The new LBA provides an 8-lane (x8) Root Port compliant with the PCIe 1.1 specification. The  
six corresponding slots are compatible with PCIe cards with x8 or smaller edge connectors. PCIe  
slots are not compatible with PCI or PCI-X cards. Physical keying prevents installation of PCI  
or PCI-X cards into PCIe slots, or PCIe cards into PCI-X slots.  
The new PCIe I/O backplane board is a respin of the SIOBP3, with six of the LBA ASICs replaced  
with new PCIe LBA ASICs. These new LBA ASICs populate slots 2, 3, 4, 5, 6, and 7. All other  
slots contain PCI LBA ASICs. Slot 2 is a dual-thin rope; slots 3, 4, and 7 are fat-ropes and slots 5  
and 6 are dual-fat ropes. All slots are hot-pluggable (Figure 1-12 “PCIe I/O Rope Mapping”).  
The new AIOBP I/O backplane uses most of the same mechanical components as the SIOBP. The  
differences are the PCIe connector and the card extractor hardware.
Figure 1-12 PCIe I/O Rope Mapping  
PCI Hot-Swap Support  
All 12 slots support PCI hot-plug permitting OLA and OLD of individual I/O cards without  
impacting the operation of other cards or requiring system downtime. Card slots are physically  
isolated from each other by nonconductive card separators that also serve as card ejectors to aid  
in I/O card removal. A pair of light pipes attached to each separator conveys the status of the  
slot power (green) and attention (yellow/amber) LEDs, clearly associating the indicators with  
appropriate slots. An attention button (doorbell) and a manual retention latch (MRL) are associated
with each slot to support the initiation of hot-plug operations from the I/O chassis.  
The core I/O provided a base set of I/O functions required by Superdome protection domains.  
In past Superdomes, PCI slot 0 of the I/O backplane provided a secondary edge connector to  
support a core I/O card. In the sx2000 chipset, the core I/O function is moved onto the PDH card,  
and the extra core I/O sideband connector is removed from the SIOBP board.  
System Management Station  
The Support Management Station (SMS) provides support, management and diagnostic tools  
for field support. This station combines software applications from several organizations within  
HP onto a single platform with the intent of helping field support reduce MTTR. Applications  
running on the SMS include tools to collect and analyze system log information, analyze and  
decode crash dump data, perform scan diagnostics, and provide configuration rules and  
recommendations for CEs. The SMS also acts as an FTP server for the PDC, Itanium, and
manageability firmware files needed to perform firmware updates on the systems. The SMS is  
also host to the Partition Manager Command-Line Interface tool used for partitioning the sx1000  
and sx2000 platforms. The SMS software runs on both a Windows-based PC and an HP-UX  
workstation (Table 1-3 “SMS Lifecycles”). The SMS supports both HP Integrity Superdome and
HP 9000 (PA-RISC) Superdome systems. By default, customer orders specify the PC SMS for new systems. Support for
sx1000 and sx2000 systems is provided for the HP-UX workstations currently in the field. New  
customers will purchase a Windows-based HP Series rp5700 PC. The support provided on the  
prior generation of SMS is equivalent to that available for Superdome but does not include new  
capabilities developed for the Windows environment. Each customer site containing a Superdome  
system must have at least one SMS. Each SMS must have an Ethernet connection to the
management LAN of each system MP on which it is used. If possible, locate the SMS close to the  
system being tested so field support has convenient access to both machines.  
Table 1-3 SMS Lifecycles
Superdome                                    SMS                                                         Console
Legacy prior to April 2004                   rp2470                                                      Supported PC/workstation (e.g. B2600)
Legacy after April 2004                      UNIX SMS: rx2600, HP-UX 11i v2 ONLY                         TFT5600
Legacy upgraded to sx1000 and sx2000         Windows SMS: ProLiant ML350 G4P, TFT5600 & Ethernet         Existing console device
                                             switch, Windows 2000 Server SP4; or any HP-UX 11.0 or
                                             later SMS with software upgrade
New sx1000 and sx2000                        Windows SMS: ProLiant ML350 G5, TFT7600 & Ethernet          TFT7600
                                             switch, Windows Server 2000; or UNIX SMS: rx2620,
                                             HP-UX 11i v2 ONLY
sx1000 and sx2000 beginning September 2009   Windows HP PC SMS                                           TFT7600
User Accounts  
Two standard user accounts are created on the SMS. The first account user name is root and it  
uses the standard root password for Superdome SMS stations. This account has administrative  
access. The second account user name is hduser and it uses the standard hduser password for  
the Superdome SMS stations. This account has general user permissions.  
New Server Cabling  
Three new Superdome cables designed for the sx2000 improve data rate and electrical  
performance:  
an m-Link cable  
two types (lengths) of e-Link cable  
a clock cable  
m-Link Cable  
The m-Link cable (A9834-2002A) is the primary backplane to second cabinet backplane high  
speed interconnect. The m-Link cable connects XBCs between system and I/O backplanes. The  
cable uses 4x10 HMZD connectors with Amphenol Spectra-Strip 26AWG twin-ax cable material.  
The m-Link cable comes in a single length but is used at several connection points, so you
must manage excess cable length carefully. The ideal routing keeps m-Link cables from
blocking access of power and XBC modules. Twelve high-speed cables must be routed around  
the backplane frame with the support of mechanical retentions. The m-Link cable is designed  
with a more robust dielectric material than the legacy REO cable and can withstand a tighter  
bend radius. However, HP recommends keeping the minimum bend radius at 2 inches.  
e-Link Cable  
The e-Link cable (A9834-2000B) is seven feet long, and the external e-Link cable
(A9834-2001A) is 14 feet long. Both use 2-mm HM connectors with Gore 26AWG PTFE twin-ax  
cable material. The e-Link cable connects the cell to the local I/O chassis, and the external e-Link  
cable connects the cells to a remote PCI-X chassis. Because both the e-Link and the external e-Link  
use the same cable material as the legacy REO cable, cable routing and management of these  
cables in sx2000 system remain unchanged relative to Superdome. The external e-Link cable  
requires a bend radius no smaller than two inches. The e-Link cable requires a bend radius no  
smaller than four inches. Figure 1-13 illustrates an e-Link cable.  
Figure 1-13 e-Link Cable  
During system installation, two internal e-Link or two external e-Link cables are needed for each  
cell board and I/O backplane. Twelve m-Link cables are needed for each dual-cabinet  
configuration.  
Figure 1-14 Backplane Cables  
Clock Cable  
The clock distribution to a second cabinet for the sx2000 requires a new cable (A9834-2003A).  
Firmware  
The newer Intel® Itanium® Processor firmware consists of many components loosely coupled  
by a single framework. These components are individually linked binary images that are bound  
together at run time. Internally, the firmware employs a software database called a device tree  
to represent the structure of the hardware platform and to provide a means of associating software  
elements with hardware functionality.  
Itanium or PA-RISC firmware releases for HP Integrity Superdome/sx2000 or HP 9000/sx2000  
are available.  
Itanium Firmware for HP Integrity Superdome/sx2000  
The Itanium firmware incorporates the following firmware interfaces:  
Figure 1-15 Itanium Firmware Interfaces  
Processor Abstraction Layer (PAL) provides a seamless firmware abstraction between the  
processor, the system software, and the platform firmware.  
System Abstraction Layer (SAL) provides a uniform firmware interface and initializes and  
configures the platform.  
Extensible Firmware Interface (EFI) provides an interface between the OS and the platform  
firmware.  
Advanced Configuration and Power Interface (ACPI) provides a new standard environment  
for configuring and managing server systems. It moves system configuration and  
management from the BIOS to the operating system and abstracts the interface between the  
platform hardware and the OS software, thereby enabling each to evolve independently of  
the other.  
The firmware supports HP-UX 11i version 2, Linux, Windows, and OpenVMS through the  
Itanium® processor family standards and extensions. It includes no operating system-specific  
functionality. Every OS is presented the same interface to system firmware, and all features are  
available to each OS.  
NOTE: Windows Server 2003 Datacenter does not support the latest ACPI specification (2.0).  
The firmware must provide legacy (1.0b) ACPI tables.  
Depending on the setting of the acpiconfig command, different ACPI tables are presented to the OS. The firmware
implements the standard Intel® Itanium® Processor family interfaces with some  
implementation-specific enhancements that the OS can use but is not required to use, such as  
page deallocation table reporting, through enhanced SAL_GET_STATE_INFO behavior.  
User Interface  
The Intel® Itanium® processor family firmware employs a user interface called the Pre-OS  
system startup environment (POSSE). The POSSE shell is based on the EFI shell. Several commands  
were added to the EFI shell to support HP value-added functionality. The new commands  
encompass functionality similar to BCH commands on PA-RISC systems. However, the POSSE  
shell is not designed to encompass all BCH functionality. They are separate interfaces.  
Error and Event IDs  
The new system firmware generates event IDs, similar to chassis codes, for errors, events, and  
forward progress to the MP through common shared memory. The MP interprets, stores, and  
reflects these event IDs back to running partitions. This helps in the troubleshooting process.  
The following seven firmware packages are installed in the sx2000 to support the IPMI manageability
environment:  
Management Processor (h_mp.xxx.xxx.xxx.frm)  
Power Monitor (h_pm.xxx.xxx.xxx.frm)  
Cabinet-Level Utilities (h_clu.xxx.xxx.xxx.frm)  
Cell (h_cell_pdh.xxx.xxx.xxx.frm)  
Processor-Dependent Hardware Code (h_pdhc.xxx.xxx.xxx.frm)  
Event Dictionary (h_ed.xxx.xxx.xxx.frm)  
Intel® Itanium® Processor Family Firmware (ipf.x.xx.frm)  
For the latest Superdome sx2000 firmware levels, check your Engineering Advisories.  
To update firmware on the Superdome sx2000, use the FW command from the MP Main Menu.
MP MAIN MENU:  
CO: Consoles  
VFP: Virtual Front Panel  
CM: Command Menu  
CL: Console Logs  
SL: Show Event Logs  
FW: Firmware Update  
HE: Help  
X: Exit Connection  
Itanium System Firmware Functions  
Support for HP-UX, Windows (Enterprise Server and Data Center), Linux, and OpenVMS  
Support for EFI 1.10.14.61 and EFI 1.1 I/O drivers  
Support for ACPI 1.0b up through 2.0c (OS-dependent)  
Parallel main memory initialization. Support for double chip-spare in memory ECC code  
OLAD of new cells with noninterleaved memory, OLAD I/O cards  
Support for link level retry with self-healing for crossbar and I/O links  
Support for both native and EBC EFI I/O card drivers  
Maximum 128 CPU cores per partition (8 CPU cores per cell)  
Supports mixing Itanium cells of different frequencies or major steps (generations) in separate  
partitions within a complex  
Supports mixing of Itanium and PA-RISC processors in the same complex but in different  
partitions.  
Support for 1, 2, and 4 GB DDR-II DIMMs  
Support for mixed DIMMs on a cell  
Support for common DIMMs across sx2000 platforms  
Supports nonuniform memory configurations within a partition  
Address parity checking on DIMMs (no address ECC)  
Support for cell local memory  
Support for adding DIMMs in increments of eight  
Support for new LBA and PCI-X 2.0 (266 MHz) (PCI compatible)  
Support for all PCI-X and PCI cards supported by respective sx1000 systems  
Elimination of Superdome core I/O card for Superdome/sx2000 console  
Infiniband supported using PCI-X cards only  
Support for shadowed system firmware flash  
PA-RISC Firmware for HP 9000/sx2000 Servers  
The PA-RISC firmware incorporates the firmware interfaces shown in Figure 1-16.
Figure 1-16 PA-RISC Firmware Interfaces  
PA-RISC System Firmware Functions  
Supports only HP-UX  
Supports mixing of PA-RISC and Itanium cell boards in the same complex but in different  
partitions  
Detects and rejects mixing of Itanium cell boards in a partition with PA-RISC cell boards
Support for all system management tools available with sx1000 systems
FRU isolation and event ID reporting as enabled by the hardware and manageability firmware  
Cell OLAD (COLAD) of cells with noninterleaved memory. PA-RISC I/O card OLAD support  
requirements and design are the same as on sx1000 systems.  
Support for link level retry with self-healing for crossbar and I/O links  
Maximum of eight processor cores per cell board, based on NVM part size of 12 MB  
Supports two processors per CPU module  
Supports mixing of specific processor versions after they are identified as being compatible  
by the program  
Dual-core configuration, deconfiguration. Support for 1, 2, and 4 GB DDR-II DIMMs  
Support for mixed DIMMs on a cell  
Support for nonuniform memory configurations within a partition and address parity  
checking on DIMMs (no address ECC)  
Support for configuring and deconfiguring DIMMs in increments of two  
Enforcement of DIMM loading order  
PCI-X 2.0 (266 MHz) based I/O attach (PCI compatible)  
Support for all PCI-X and PCI cards supported by respective sx1000 systems  
Support for I/O slot doorbells and latches  
Elimination of Superdome core I/O card for Superdome/sx2000 console  
Server Configurations  
See the HP System Partitions Guide for information about proper configurations.  
Basic Configuration Rules  
Single-Cabinet System:  
Two to 32 CPUs per complex with single-core processors  
Four to 64 CPU cores per complex with dual-core processors  
Minimum of one cell  
Maximum of eight cells  
Dual-Cabinet System:  
Six to 64 CPU cores per complex with single-core processors  
Twelve to 128 CPU cores per complex with dual-core processors  
Minimum of three cells  
Maximum of 16 cells  
No master/checker support for dual-core processors  
The rules for mixing processors are as follows:  
No mixing of frequencies on a cell or within a partition  
No mixing of cache sizes on a cell or within a partition  
No mixing of major steppings on a cell or within a partition  
Support for Itanium and PA-RISC processors within the same complex, but not in the same  
partition  
Maximum of 32 DIMMs per cell  
32 GB memory per cell with 256 MB SDRAMs (1 GB DIMMs)  
64 GB memory per cell with 512 MB SDRAMs (2 GB DIMMs)  
DIMM mixing is allowed  
Server Errors  
To support high availability (HA), the new chipset includes functionality for error correction,  
detection and recovery. Errors in the new chipset are divided into the following categories:  
nPartition access  
Hardware correctable  
Global shared memory  
Hardware uncorrectable  
Fatal blocking time-out  
Deadlock recovery errors  
These categories are listed in increasing severity, ranging from hardware partition access errors,  
which are caused by software or hardware running in another partition, to deadlock recovery  
errors, which indicate a serious hardware failure that requires a reset of the cell to recover. The  
term software refers to privileged code, such as PDC or the OS, but not to user code. The sx2000  
chipset supports the nPartition concept, where user and software errors in one nPartition cannot  
affect another nPartition.  
2 System Specifications  
The following specifications are based on ASHRAE Class 1. Class 1 is a controlled computer  
room environment, in which products are subject to controlled temperature and humidity  
extremes. Throughout this chapter, each specification is defined as thoroughly as possible so
that all data is considered for a successful site preparation and system installation.
For more information, see Generalized Site Preparation Guide, Second Edition, part number  
5991-5990, at the http://docs.hp.com website.  
Dimensions and Weights  
This section contains server component dimensions and weights for the system.  
Component Dimensions  
Table 2-1 lists the dimensions for the cabinet and components. Table 2-2 lists the dimensions for
optional I/O expansion (IOX) cabinets.  
Table 2-1 Server Component Dimensions  
Component  
Width (in/cm)  
Depth (in/cm)  
Height (in/cm)  
Maximum Quantity  
per Cabinet  
Cabinet  
30/76.2  
48/121.9  
77.2/195.6  
3.0/7.6  
1
1
Cell board  
16.5/41.9  
20.0/50.2  
10.125/25.7  
17.6/44.7  
23.75/60.3  
17.5/44.4  
11.0/27.9  
8
1
Cell power board (CPB) 16.5/41.9  
3.0/7.6  
8
I/O backplane  
Master I/O backplane  
I/O card cage  
PDCA  
11.0/27.9  
3.25/8.3  
12.0/30.5  
7.5/19.0  
1
1
4
2
1.5/3.8  
8.38/21.3  
9.75/24.3  
1
SD16 is limited to a maximum of 4.  
Table 2-2 I/O Expansion Cabinet Component Dimensions  
Cabinet Type  
E33  
Height (in/cm)  
63.5/161  
Width (in/cm)  
23.5/59.7  
Depth (in/cm)  
77.3/196.0  
36.5/92.7  
E41  
77.5/197  
23.5/59.7  
Component Weights  
Table 2-3 lists the server and component weights. Table 2-4 lists the weights for optional IOX  
cabinets.  
NOTE: To determine the weight of the Support Management Station (SMS) and any console  
used with this server, see the related documents.  
Table 2-3 System Component Weights
Component                                    Weight Per Unit (lb/kg)   Quantity   Weight (lb/kg)
Chassis¹                                     745.17/338.10             1          745.17/338.10
Cell board without power board and DIMMs     30.96/14.04               8          247.68/112.32
Cell power board                             8.50/3.86                 8          68.00/30.88
DIMMs                                        0.20/0.09                 256        51.20/23.04
Bulk power supply                            3.83/1.74                 6          23.00/10.44
PDCA                                         26.00/11.80               2          52.00/23.59
I/O card cage                                36.50/16.56               4          146.00/66.24
I/O cards                                    0.45/0.20                 48         21.60/9.80
Fully configured server (SD32 cabinet)²      1354.65/614.41            1          1354.65/614.41
¹ The listed weight for a chassis includes the weight of all components not listed in Table 2-3.
² The listed weight for a fully configured cabinet includes all components and quantities listed in Table 2-3.
Table 2-4 IOX Cabinet Weights
Component                    Weight (lb/kg)
Fully configured cabinet¹    1104.9/502.2
I/O card cage                36.50/16.56
Chassis                      264/120
¹ The listed weight for a fully configured cabinet includes all items installed in a 1.6 meter cabinet. Add approximately 11 pounds when using a 1.9 meter cabinet.
Shipping Dimensions and Weights  
Table 2-5 lists the dimensions and weights of the SMS and a single cabinet with shipping pallet.  
Table 2-5 Miscellaneous Dimensions and Weights  
Equipment  
Width (in/cm)  
Depth/Length (in/cm) Height (in/cm)  
Weight (lb/kg)  
System on shipping  
39.00/99.06  
48.63/123.5  
48.00/121.9  
48.00/121.9  
73.25/186.7  
62.00/157.5  
88.25/224.1  
1424.66/648.67  
123  
pallet  
Blowers and frame on  
shipping pallet  
40.00/101.6  
38.00/96.52  
99.2/45.01  
1115/505.8  
IOX cabinet on shipping  
4
pallet  
1
2
3
4
Shipping box, pallet, ramp, and container add approximately 116 pounds (52.62 kg) to the total system weight.  
Blowers and frame are shipped on a separate pallet.  
Size and number of miscellaneous pallets are determined by the equipment ordered by the customer.  
Assumes no I/O cards or cables installed. The shipping kit and pallet and all I/O cards add approximately 209 pounds  
(94.80 kg) to the total weight.  
Electrical Specifications  
The following specifications are based on ASHRAE Class 1. Class 1 is a controlled computer  
room environment, in which products are subject to controlled temperature and humidity  
extremes. Throughout this chapter, each specification is defined as thoroughly as possible so
that all data is considered for a successful site preparation and system installation.
Grounding  
The site building must provide a safety ground or protective earth for each ac service entrance  
to all cabinets.  
WARNING! This equipment is Class 1 and requires full implementation of the grounding  
scheme to all equipment connections. Failure to attach to protective earth results in loss of  
regulatory compliance and creates a possible safety hazard.  
Circuit Breaker  
Each cabinet using a 3-phase, 4-wire input requires a dedicated circuit breaker to support the  
Marked Electrical current of 44 A per phase. The facility electrician and local service codes  
determine proper circuit breaker selection.  
Each cabinet using a 3-phase, 5-wire input requires a dedicated circuit breaker to support the  
Marked Electrical current of 24 A per phase. The facility electrician and local service codes  
determine proper circuit breaker selection.  
NOTE: When using the minimum-size breaker, always choose circuit breakers with the  
maximum allowed trip delay to avoid nuisance tripping.  
Power Options  
Table 2-6 describes the available power options. Table 2-7 provides details about the available  
options. The options listed are consistent with options for earlier Superdome systems.  
Table 2-6 Available Power Options
Option¹  Source Type  Source Voltage (Nominal)          PDCA Required  Input Current Per Phase  Power Receptacle Required
6        3-phase      Voltage range 200 to 240 V ac,    4-wire         44 A maximum per phase   Connector and plug provided with a 2.5 meter
                      phase-to-phase, 50 Hz/60 Hz                                               (8.2 feet) power cable. Electrician must hardwire
                                                                                                the receptacle to 60 A site power.
7        3-phase      Voltage range 200 to 240 V ac,    5-wire         24 A maximum per phase   Connector and plug provided with a 2.5 meter
                      phase-to-neutral, 50 Hz/60 Hz                                             (8.2 feet) power cable. Electrician must hardwire
                                                                                                the receptacle to 32 A site power.
¹ A dedicated branch circuit is required for each PDCA installed.
Table 2-7 Option 6 and 7 Specifics
PDCA Part Number         Attached Power Cord                                      Attached Plug           Receptacle Required
A5201-69023 (Option 6)   OLFLEX 190 (PN 600804) is a 2.5 meter (8.2 feet)         Mennekes ME 460P9       Mennekes ME 460R9
                         multiconductor, 600 V, 90 °C, UL and CSA approved,       (60 A capacity)         (60 A capacity)
                         oil resistant flexible cable (8 AWG, 60 A capacity).
A5201-69024 (Option 7)   H07RN-F (OLFLEX PN 1600130) is a 2.5 meter (8.2 feet)    Mennekes ME 532P6-14    Mennekes ME 532R6-1500
                         heavy-duty, neoprene-jacketed, harmonized European       (32 A capacity)         (32 A capacity)
                         flexible cable (4 mm², 32 A capacity).
NOTE: A qualified electrician must wire the PDCA receptacle to site power using copper wire  
and in compliance with all local codes.  
All branch circuits used within a complex must be connected together to form a common ground.  
All power sources such as transformers, UPSs, and other sources, must be connected together  
to form a common ground.  
When only one PDCA is installed in a system cabinet, it must be installed as PDCA 0. For the  
location of PDCA 0, see Figure 2-1.  
NOTE: When wiring a PDCA, phase rotation is unimportant. When using two PDCAs, however,  
the rotation must be consistent for both.  
Figure 2-1 PDCA Locations  
System Power Requirements  
Table 2-8 and Table 2-9 list the ac power requirements for an HP Integrity Superdome/sx2000  
system. These tables provide information to help determine the amount of ac power needed for  
your computer room.  
Table 2-8 Power Requirements (Without SMS)
Requirement                                       Value                          Comments
Nominal input voltage                             200/208/220/230/240 V ac rms
Input voltage range (minimum to maximum)          200 to 240 V ac rms            Autoselecting (measured at input terminals)
Frequency range (minimum to maximum)              50/60 Hz
Number of phases                                  3
Maximum inrush current                            90 A (peak)
Product label maximum current, 3-phase, 4-wire    44 A rms                       Per-phase at 200 to 240 V ac
Product label maximum current, 3-phase, 5-wire    24 A rms                       Per-phase at 200 to 240 V ac
Power factor correction                           0.95 minimum
Ground leakage current (mA)                       > 3.5 mA                       See the following WARNING.
WARNING! Beware of shock hazard. When connecting or removing input power wiring, always  
connect the ground wire first and disconnect it last.  
Component Power Requirements  
Table 2-9 Component Power Requirements (Without SMS)
Component                         Component Power Required 50 Hz to 60 Hz¹
Maximum configuration for SD16    8,200 VA
Maximum configuration for SD32    12,196 VA
Cell board                        900 VA
I/O card cage                     500 VA
¹ A number to use for planning, to allow for enough power to upgrade through the life of the system.
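For a rough first-pass estimate, the per-component planning numbers in Table 2-9 can be summed, as in the minimal sketch below. This is only an approximation under that assumption; it ignores cabinet overhead such as blowers and power-conversion losses, which is why the SD32 maximum-configuration figure in Table 2-9 (12,196 VA) is higher than the simple sum.

#include <stdio.h>

/* Planning figures from Table 2-9, in VA. */
#define CELL_BOARD_VA   900
#define IO_CARD_CAGE_VA 500

/* First-order planning estimate: cell boards plus I/O card cages only.
 * Real cabinets draw more (blowers, bulk power conversion, and so on), so
 * use the SD16/SD32 maximum-configuration figures in Table 2-9 for sizing. */
static unsigned planning_va(unsigned cells, unsigned io_card_cages)
{
    return cells * CELL_BOARD_VA + io_card_cages * IO_CARD_CAGE_VA;
}

int main(void)
{
    /* 8 cells and 4 I/O card cages: 8 x 900 + 4 x 500 = 9,200 VA, versus
     * the 12,196 VA SD32 maximum-configuration planning number. */
    printf("8 cells + 4 I/O card cages: %u VA\n", planning_va(8, 4));
    return 0;
}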
IOX Cabinet Power Requirements  
The IOX requires a single-phase 200-240 V ac input. Table 2-10 lists the ac power requirements  
for the IOX cabinet.  
NOTE: The IOX accommodates two ac inputs for redundancy.  
Table 2-10 I/O Expansion Cabinet Power Requirements (Without SMS)
Requirement                                  Value
Nominal input voltage                        200/208/220/230/240 V ac rms
Input voltage range (minimum to maximum)     170-264 V ac rms
Frequency range (minimum to maximum)         50/60 Hz
Number of phases                             1
Marked electrical input current              16 A
Maximum inrush current                       60 A (peak)
Power factor correction                      0.95 minimum
Table 2-11 I/O Expansion Cabinet Component Power Requirements
Component                   Component Power Required 50 Hz to 60 Hz
Fully configured cabinet    3200 VA
I/O card cage               500 VA
ICE                         600 VA
IOX Cabinet Power Cords  
Table 2-12 lists the power cords for the IOX cabinet.  
Table 2-12 I/O Expansion Cabinet ac Power Cords
Part Number A5499AZ   Where Used       Connector Type
-001                  North America    L6-20
-002                  International    IEC 309
Environmental Requirements  
This section provides the environmental, power dissipation, noise emission, and air flow  
specifications.  
Temperature and Humidity Specifications  
Table 2-13 Operational Physical Environment Requirements¹
Temperature (Dry Bulb °C)²                    Relative Humidity (%), Noncondensing³ ⁴   Dew Point (°C, max)   Rate of Change (°C/hr, max)
Allowable⁵           Recommended⁶             Allowable⁵          Recommended⁶
15 to 32             20 to 25                 20 to 80            40 to 55              17                    5
(59 °F to 90 °F)     (68 °F to 77 °F)
¹ The maximum elevation for the operating environment is 3050 meters.
² Dry bulb temperature is the regular ambient temperature. Derate maximum dry bulb temperature 1 °C/300 m above 900 m.
³ Must be a noncondensing environment.
⁴ With installed media, the minimum temperature is 10 °C and maximum relative humidity is limited to 80%. Specific media requirements can vary.
⁵ Allowable equipment design extremes as measured at the equipment inlet.
⁶ Recommended target facility design and operational range.
Table 2-14 Nonoperational Physical Environment Requirements
                                                       Storage        Powered Off (Installed)
Temp (°C, dry bulb, regular ambient temperature)       -40 to 60      5 to 45
Rel Hum (%), Noncondensing                             8 to 90        8 to 90
Dew Point (°C, max)                                    32             29
NOTE: The values in Table 2-14 meet or exceed all ASHRAE specifications.  
Power Dissipation  
Table 2-15 lists the power requirements by configuration (number of cell boards, amount of  
memory per cell, and number of I/O chassis) for the HP Integrity Superdome/sx2000.  
The table contains two columns of power numbers expressed in watts. The Breaker Power column  
lists the power used to size the wall breaker at the installation site. The Typical Power column  
lists typical power. Typical power numbers can be used to assess the average utility cost of  
cooling and electrical power. Table 2-15 also lists the recommended breaker sizes for 4-wire and  
5-wire sources.  
WARNING! Do not connect a 380 to 415 V ac supply to a 4-wire PDCA. This is a safety hazard  
and results in damage to the product. Line-to-line or phase-to-phase voltage measured at 380 to  
415 V ac must always be connected using a 5-wire PDCA.  
Table 2-15 HP Integrity Superdome/sx2000 Dual-Core CPU Configurations¹
Cells (in    Memory (DIMMs   I/O (fully    Typical Power   Cooling    Breaker Power
cabinet)     per cell)       populated)    (Watts)         (BTU/hr)   (Watts)²
8            32              4             9490            32382      11957
8            16              2             7620            26001      9601
8            8               4             8140            27776      10256
8            8               2             7180            24500      9047
8            4               4             7620            26001      9601
8            4               2             6660            22726      8391
6            16              4             7320            24978      9223
6            16              2             6360            21702      8013
6            8               4             7000            23886      8820
6            8               2             6040            20610      7610
6            4               4             6680            22794      8417
6            4               2             5720            19518      7207
4            16              4             6170            21054      7774
4            16              2             5210            17778      6564
4            8               4             5960            20337      7509
4            8               2             5000            17061      6300
4            4               4             5760            19655      7257
4            4               2             4800            16379      6048
2            16              2             4010            13683      5052
2            8               2             3890            13274      4901
2            4               2             3780            12898      4763
¹ Values in Table 2-15 are based on 25 W load I/O cards, 1 GB DIMMs, and four Intel® Itanium® dual-core processors with 18 MB or 24 MB cache per cell board, or four PA-RISC processors with 64 MB.
² These numbers are valid only for the specific configurations shown. Any upgrades can require a change to the breaker size. A 5-wire source uses a 4-pole breaker, and a 4-wire source uses a 3-pole breaker. The protective earth (PE) ground wire is not switched.
Table 2-16 HP Integrity Superdome/sx2000 Single-Core CPU Configurations¹
Cells (in    Memory (DIMMs   I/O (fully    Typical Power   Cooling    Breaker Power
cabinet)     per cell)       populated)    (Watts)         (BTU/hr)   (Watts)²
8            32              4             9130            31181      11503
8            16              2             7260            24794      9147
8            8               4             7783            26580      9806
8            8               2             6823            23302      8596
8            4               4             7260            24794      9147
8            4               2             6300            21516      7938
6            16              4             6968            23797      8779
6            16              2             6008            20518      7570
6            8               4             6640            22677      8366
6            8               2             5680            19398      7156
6            4               4             6325            21601      7969
6            4               2             5365            18322      6759
4            16              4             5813            19852      7324
4            16              2             4853            16574      6114
4            8               4             4647            15870      5855
4            8               2             3687            12592      4645
4            4               4             5382            18380      6781
4            4               2             4422            15102      5571
2            16              2             3656            12486      4606
2            8               2             3534            12069      4453
2            4               2             3423            11690      4313
¹ Values in Table 2-16 are based on 25 W load I/O cards, 1 GB DIMMs, and four Intel® Itanium® single-core processors with 9 MB cache per cell board.
² These numbers are valid only for the specific configurations shown. Any upgrades can require a change to the breaker size. A 5-wire source uses a 4-pole breaker, and a 4-wire source uses a 3-pole breaker. The protective earth (PE) ground wire is not switched.
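The Cooling columns in Table 2-15 and Table 2-16 are the typical power figures converted from watts to BTU/hr (1 W is approximately 3.412 BTU/hr). The small sketch below shows the conversion and can be used to estimate the heat load of other configurations.

#include <stdio.h>

/* Convert an electrical load in watts to a heat load in BTU/hr
 * (1 W is approximately 3.412 BTU/hr). */
static double watts_to_btu_per_hr(double watts)
{
    return watts * 3.412;
}

int main(void)
{
    /* Example: the 8-cell, 32-DIMM, 4-I/O dual-core row lists 9490 W of
     * typical power, which converts to roughly 32,380 BTU/hr (the table
     * shows 32,382 BTU/hr). */
    printf("%.0f BTU/hr\n", watts_to_btu_per_hr(9490.0));
    return 0;
}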
Acoustic Noise Specification  
The acoustic noise specifications are as follows:  
8.2 bel (sound power level)  
65.1 dBA (sound pressure level at operator position)  
These levels are appropriate for dedicated computer room environments, not office environments.  
You must understand the acoustic noise specifications relative to operator positions within the  
computer room when adding HP Integrity Superdome/sx2000 systems to computer rooms with  
existing noise sources.  
Airflow  
HP Integrity Superdome/sx2000 systems require the cabinet air intake temperature to be between  
15°C and 32°C (59°F and 89.6°F) at 2900 CFM.
Figure 2-2 illustrates the location of the inlet and outlet air ducts on a single cabinet.  
NOTE: Approximately 5% of the system airflow draws from the rear of the system and exits  
the top of the system.  
Figure 2-2 Airflow Diagram  
A thermal report for the HP Integrity Superdome/sx2000 server is provided in Table 2-17.
Table 2-17 Physical Environmental Specifications (200-240 V ac)
Condition                    Typical Heat      Nominal Airflow      Maximum Airflow       Weight          Overall System Dimensions
                             Release (Watts)   (CFM / 10³ m³/hr)²   at 32 °C¹ ²           (pounds/kg)     (W x D x H, in / cm)
                                                                    (CFM / 10³ m³/hr)
Minimum Configuration        3423              2900 / 5.0           2900 / 5.0            926.3/420.3     30 x 48 x 77.2 / 76.2 x 121.9 x 195.6
(2 Cell, 4 DIMM, 2 I/O)³
Maximum Configuration        9130              2900 / 5.0           2900 / 5.0            1241.2/563.2    30 x 48 x 77.2 / 76.2 x 121.9 x 195.6
(8 Cell, 32 DIMM, 4 I/O)³
Typical Configuration        6968              2900 / 5.0           2900 / 5.0            1135.2/515.1    30 x 48 x 77.2 / 76.2 x 121.9 x 195.6
(6 Cell, 16 DIMM, 4 I/O)³
ASHRAE Class: 1
¹ Derate maximum dry bulb temperature 1 °C/300 m above 900 m.
² The system deviates slightly from front to top and rear airflow protocol. Approximately 5 percent of the system airflow is drawn in from the rear of the system. See Figure 2-2 (page 57) for more details.
³ See Table 2-15 (page 55) and Table 2-16 (page 55) for additional details regarding minimum, maximum, and typical configurations.
3 Installing the System  
This chapter describes installation of HP Integrity Superdome/sx2000 and HP 9000/sx2000 systems.  
Installers must have received adequate training, be knowledgeable about the product, and have  
a good overall background in electronics and customer hardware installation.  
Introduction  
The instructions in this chapter are written for Customer Support Consultants (CSC) who are  
experienced at installing complex systems. This chapter provides details about each step in the  
sx2000 installation process. Some steps must be performed before others can be completed  
successfully. To avoid undoing and redoing an installation step, follow the installation sequences  
outlined in this chapter.  
Communications Interference  
HP system compliance tests are conducted with HP supported peripheral devices and shielded  
cables, such as those received with the system. The system meets interference requirements of  
all countries in which it is sold. These requirements provide reasonable protection against  
interference with radio and television communications.  
Installing and using the system in strict accordance with instructions provided by HP minimizes  
the chances that the system will cause radio or television interference. However, HP does not  
guarantee that the system will not interfere with radio and television reception.  
Take the following precautions:  
Use only shielded cables.  
Install and route the cables according to the instructions provided.  
Ensure that all cable connector screws are firmly tightened.  
Use only HP supported peripheral devices.  
Ensure that all panels and cover plates are in place and secure before turning on the system.  
Electrostatic Discharge  
HP systems and peripherals contain assemblies and components that are sensitive to electrostatic  
discharge (ESD).  
CAUTION: Carefully observe the precautions and recommended procedures in this document  
to prevent component damage from static electricity.  
Take the following precautions:  
Always wear a grounded wrist strap when working on or around system components.  
Treat all assemblies, components, and interface connections as static-sensitive.  
When unpacking cards, interfaces, and other accessories that are packaged separately from  
the system, keep the accessories in their non-conductive plastic bags until you are ready to  
install them.  
Before removing or replacing any components or installing any accessories in the system,  
select a work area in which potential static sources are minimized, preferably an antistatic  
work station.  
Avoid working in carpeted areas and keep body movement to a minimum while installing  
accessories.  
Public Telecommunications Network Connection  
Instructions are issued to the installation site that modems cannot be connected to public  
telecommunications networks until full datacomm licenses are received for the country of  
installation. Some countries do not require datacomm licenses. The product regulations engineer  
must review beta site locations, and if datacomm licenses are not complete, ensure that the  
installation site is notified officially and in writing that the product cannot be connected to public  
telecommunications networks until the license is received.  
Unpacking and Inspecting the System  
This section describes what to do before unpacking the server and how to unpack the system  
itself.  
WARNING! Do not attempt to move the cabinet, packed or unpacked, up or down an incline  
of more than 15 degrees.  
Verifying Site Preparation  
Verifying site preparation includes gathering LAN information and verifying electrical  
requirements.  
Gathering LAN Information  
The Support Management Station (SMS) connects to the customer's LAN. Determine the
appropriate IP address.  
Verifying Electrical Requirements  
The site must be verified for proper grounding and electrical requirements prior to the system  
being shipped to the customer as part of the site preparation. Before unpacking and installing  
the system, verify with the customer that grounding specifications and power requirements are  
met.  
Checking the Inventory  
The sales order packing slip lists all equipment shipped from HP. Use this packing slip to verify  
that all equipment has arrived at the customer site.  
NOTE: To identify each item by part number, see the sales order packing slip.  
One of the large overpack containers is labeled “Open Me First.” This box contains the Solution  
Information Manual and DDCAs. The unpacking instructions are in the plastic bag taped to the  
cabinet.  
The following items are in other containers. Check them against the packing list:  
Power distribution control assembly (PDCA) and power cord  
Two blower housings per cabinet  
Four blowers per cabinet  
Four side skins with related attachment hardware  
Cabinet blower bezels and front door assemblies  
Support Management Station  
Cables  
Optional equipment  
Boot device with the operating system installed  
Inspecting the Shipping Containers for Damage  
HP shipping containers are designed to protect their contents under normal shipping conditions.  
After the equipment arrives at the customer site, carefully inspect each carton for signs of shipping  
damage.  
WARNING! Do not attempt to move the cabinet, packed or unpacked, up or down an incline  
of more than 15 degrees.  
A tilt indicator is installed on the back and side of the cabinet shipping container (Figure 3-1  
(page 61)). If the container is tilted to an angle that can cause equipment damage, the beads in  
the indicator shift positions (Figure 3-2 (page 62)). If a carton has received a physical shock and  
the tilt indicator is in an abnormal condition, visually inspect the unit for any signs of damage.  
If damage is found, document the damage with photographs and contact the transport carrier  
immediately.  
Figure 3-1 Normal Tilt Indicator  
Figure 3-2 Abnormal Tilt Indicator  
NOTE: If the tilt indicator shows that an abnormal shipping condition has occurred, write  
“possible hidden damage” on the bill of lading and keep the packaging.  
Inspection Precautions  
When the shipment arrives, check each container against the carrier's bill of lading. Inspect  
the exterior of each container immediately for mishandling or damage during transit. If any  
of the containers are damaged, request the carrier's agent be present when the container is  
opened.  
When unpacking the containers, inspect each item for external damage. Look for broken  
controls and connectors, dented corners, scratches, bent panels, and loose components.  
NOTE: HP recommends keeping the shipping container and the packaging material. If it  
becomes necessary to repackage the cabinet, the original packing material is necessary.  
If discarding the shipping container or packaging material, dispose of them in an environmentally  
responsible manner (recycle, if possible).  
Claims Procedures  
If the shipment is incomplete, if the equipment is damaged, or if it fails to meet specifications,
notify the nearest HP Sales and Service Office. If damage occurred in transit, notify the carrier  
as well.  
HP will arrange for replacement or repair without waiting for settlement of claims against the  
carrier. In the event of damage in transit, retain the packing container and packaging materials  
for inspection.  
Unpacking and Inspecting Hardware Components  
This section describes the procedures for opening the shipping container and unpacking and  
inspecting the cabinet.  
Tools Required  
The following tools are required to unpack and install the system:  
Standard hand tools, such as an adjustable wrench
ESD grounding strap  
Digital voltmeter capable of reading ac and dc voltages  
1/2-inch socket wrench  
9/16-inch wrench  
#2 Phillips screwdriver  
Flathead screwdriver  
Wire cutters or utility knife  
Safety goggles or glasses  
T-10, T-15, T-20, T-25, and T-30 Torx drivers  
9-pin to 25-pin serial cable (HP part number 24542G)  
9-pin to 9-pin null modem cable  
Unpacking the Cabinet  
WARNING! Use three people to unpack the cabinet safely.  
HP recommends removing the cardboard shipping container before moving the cabinet into the  
computer room.  
NOTE: If unpacking the cabinet in the computer room, be sure to position it so that it can be  
moved into its final position easily. Notice that the front of the cabinet (Figure 3-3) is the side  
with the label showing how to align the ramps.  
To unpack the cabinet, follow these steps:  
1. Position the packaged cabinet so that a clear area about three times the length of the package  
(about 12 feet or 3.66 m) is available in front of the unit, and at least 2 feet (0.61 m) are  
available on the sides.  
Figure 3-3 Front of Cabinet Container  
WARNING! Do not stand directly in front of the strapping while cutting it. Hold the band  
above the intended cut and wear protective glasses. These bands are under tension. When  
cut, they spring back and can cause serious eye injury.  
2. Cut the plastic polystrap bands around the shipping container (Figure 3-4 (page 64)).  
Figure 3-4 Cutting the Polystrap Bands  
3. Lift the cardboard corrugated top cap off the shipping box.  
4. Remove the corrugated sleeves surrounding the cabinet.  
CAUTION: Cut the plastic wrapping material off rather than pulling it off. Pulling the  
plastic covering off creates an ESD hazard to the hardware.  
5. Remove the stretch wrap, the front and rear top foam inserts, and the four corner inserts  
from the cabinet.  
6. Remove the ramps from the pallet and set them aside (Figure 3-5 (page 65)).  
Figure 3-5 Removing the Ramps from the Pallet  
7. Remove the plastic antistatic bag by lifting it straight up off the cabinet. If the cabinet or any  
components are damaged, follow the claims procedure. Some damage can be repaired by  
replacing the damaged part. If you find extensive damage, you might need to repack and  
return the entire cabinet to HP.  
Inspecting the Cabinet  
To inspect the cabinet exterior for signs of shipping damage, follow these steps:  
1. Look at the top and sides for dents, warping, or scratches.  
2. Verify that the power supply mounting screws are in place and locked (Figure 3-6).  
Figure 3-6 Power Supply Mounting Screws Location  
3. Verify that the I/O chassis mounting screws are in place and secure (Figure 3-7).  
Inspect all components for signs of shifting during shipment or any signs of damage.  
Figure 3-7 I/O Chassis Mounting Screws  
Moving the Cabinet Off the Pallet  
1. Remove the shipping strap that holds the BPSs in place during shipping (Figure 3-8).
Failure to remove the shipping strap will obstruct air flow into the BPS and FEPS.
Figure 3-8 Shipping Strap Location  
2. Remove the pallet mounting brackets and pads on the side of the pallet where the ramp  
slots are located (Figure 3-9).  
Figure 3-9 Removing the Mounting Brackets  
WARNING! Do not remove the bolts on the mounting brackets that attach to the pallet.  
These bolts prevent the cabinet from rolling off the back of the pallet.  
3. On the other side of the pallet, remove only the bolt on each mounting bracket that is attached  
to the cabinet.  
4. Insert the ramps into the slots on the pallet.  
CAUTION: Make sure the ramps are parallel and aligned (Figure 3-10).  
The casters on the cabinet must roll unobstructed onto the ramp.  
Figure 3-10 Positioning the Ramps  
WARNING! Do not attempt to roll a cabinet without help. The cabinet can weigh as  
much as 1400 pounds (635 kg). Three people are required to roll the cabinet off the pallet.  
Position one person at the rear of the cabinet and one person on each side.  
WARNING! Do not attempt to move the cabinet, either packed or unpacked, up or down  
an incline of more than 15 degrees.  
5. Carefully roll the cabinet down the ramp (Figure 3-11).  
Figure 3-11 Rolling the Cabinet Down the Ramp  
6. Unpack any other cabinets that were shipped.  
Unpacking the PDCA  
At least one PDCA ships with the system. In some cases, the customer might order two PDCAs,  
the second to be used as a backup power source. Unpack the PDCA and ensure it has the correct
power cord option for the installation.
Several power cord options are available for the PDCAs. Only options 6 and 7 are currently  
available in new system configurations (Table 3-1 (page 71)). Table 3-2 (page 71) details options  
6 and 7.  
Table 3-1 Available Power Options

Option 6
    Source Type: 3-phase
    Source Voltage (Nominal): Voltage range 200 to 240 V ac, phase-to-phase, 50 Hz / 60 Hz
    PDCA Required: 4-wire
    Input Current Per Phase: 44 A maximum per phase
    Power Receptacle Required: Connector and plug provided with a 2.5 m (8.2 feet) power cable.
    An electrician must hardwire the receptacle to 60 A site power.

Option 7
    Source Type: 3-phase
    Source Voltage (Nominal): Voltage range 200 to 240 V ac, phase-to-neutral, 50 Hz / 60 Hz
    PDCA Required: 5-wire
    Input Current Per Phase: 24 A maximum per phase
    Power Receptacle Required: Connector and plug provided with a 2.5 m (8.2 feet) power cable.
    An electrician must hardwire the receptacle to 32 A site power.

1. A dedicated branch circuit is required for each PDCA installed.
Table 3-2 Power Cord Option 6 and 7 Details

PDCA Part Number: A5201-69023 (Option 6)
    Attached Power Cord: OLFLEX 190 (PN 600804), a 2.5 meter multiconductor, 600 V, 90˚C, UL and
    CSA approved, oil resistant flexible cable (8 AWG, 60 A capacity)
    Attached Plug: Mennekes ME 460P9 (60 A capacity)
    Receptacle Required: Mennekes ME 460R9 (60 A capacity)

PDCA Part Number: A5201-69024 (Option 7)
    Attached Power Cord: H07RN-F (OLFLEX PN 1600130), a 2.5 meter heavy-duty neoprene jacketed
    harmonized European flexible cable (4 mm2, 32 A capacity)
    Attached Plug: Mennekes ME 532P6-14 (32 A capacity)
    Receptacle Required: Mennekes ME 532R6-1500 (32 A capacity)
Returning Equipment  
If the equipment is damaged, use the original packing material to repackage the cabinet for  
shipment. If the packing material is not available, contact the local HP Sales and Support Office  
regarding shipment.  
Before shipping, place a tag on the container or equipment to identify the owner and the service  
to be performed. Include the equipment model number and the full serial number, if applicable.  
The model number and the full serial number are printed on the system information labels located  
at the bottom front of the cabinet.  
WARNING! Do not attempt to push the loaded cabinet up the ramp onto the pallet. Three  
people are required to push the cabinet up the ramp and position it on the pallet. Inspect the  
condition of the loading and unloading ramp before use.  
Repackaging  
To repackage the cabinet, follow these steps:  
1. Assemble the HP packing materials that came with the cabinet.  
2. Carefully roll the cabinet up the ramp.  
3. Attach the pallet mounting brackets to the pallet and the cabinet.  
4. Reattach the ramps to the pallet.  
5. Replace the plastic antistatic bag and foam inserts.  
6. Replace the cardboard surrounding the cabinet.  
7. Replace the cardboard caps.  
8. Secure the assembly to the pallet with straps.  
The cabinet is now ready for shipment.  
Setting Up the System  
After a site is prepared, the system is unpacked, and all components are inspected, the system  
can be prepared for booting.  
Moving the System and Related Equipment to the Installation Site  
Carefully move the cabinets and related equipment to the installation site but not into the final  
location. If the system is to be placed at the end of a row, you must add side bezels before  
positioning the cabinet in its final location. Check the path from where the system was unpacked  
to its final destination to make sure the way is clear and free of obstructions.  
WARNING! If the cabinet must be moved up ramps, be sure to maneuver it using three people.  
Unpacking and Installing the Blower Housings and Blowers  
Each cabinet contains two blower housings and four blowers. Although similar in size, the blower  
housings for each cabinet are not the same; one has a connector to which the other attaches. To  
unpack and install the housings and blowers, follow these steps:  
1. Unpack the housings from the cardboard box and set them aside.  
The rear housing is labeled Blower 3 Blower 2. The front housing is labeled Blower 0 Blower  
1.  
CAUTION: Do not lift the housing by the frame (Figure 3-12).  
Figure 3-12 Blower Housing Frame  
2. Remove the cardboard from the blower housing (Figure 3-13).  
This cardboard protects the housing baffle during shipping. If it is not removed, the fans
cannot work properly.
Figure 3-13 Removing Protective Cardboard from the Housing  
NOTE: Double-check that the protective cardboard has been removed.  
3. Using the handles on the housing labeled Blower 3 Blower 2, align the edge of the housing  
over the edge at the top rear of the cabinet, and slide it into place until the connectors at the  
back of each housing are fully mated (Figure 3-14). Then tighten the thumbscrews at the  
front of the housing.  
Figure 3-14 Installing the Rear Blower Housing  
4. Using the handles on the housing labeled Blower 0 Blower 1, align the edge of the housing  
over the edge at the top front of the cabinet, and slide it into place until the connectors at  
the back of each housing are fully mated (Figure 3-15). Then tighten the thumbscrews at the  
front of the housing.  
Figure 3-15 Installing the Front Blower Housing  
5. Unpack each of the four blowers.  
6. Insert each of the four blowers into place in the blower housings with the thumbscrews at  
the bottom (Figure 3-16).  
Figure 3-16 Installing the Blowers  
7. Tighten the thumbscrews at the front of each blower.  
8. If required, install housings on any other cabinets that were shipped with the system.  
Attaching the Side Skins and Blower Side Bezels  
Two cosmetic side panels affix to the left and right sides of the system. In addition, each system  
has bezels that cover the sides of the blowers.  
IMPORTANT: Be sure to attach the side skins at this point in the installation sequence, especially  
if the cabinet is to be positioned at the end of a row of cabinets or between cabinets.  
Attaching the Side Skins  
Each system has four side skins: two front-side skins and two rear-side skins.  
NOTE: Attach side skins to the left side of cabinet 0 and the right side of cabinet 1 (if applicable).  
To attach the side skins, follow these steps:  
1. If not already done, remove the side skins from their boxes and protective coverings.  
2. From the end of the brackets at the back of the cabinet, position the side skin with the lap
joint (Rear) over the top bracket and under the bottom bracket, and gently slide it into position
(Figure 3-17).
Two skins are installed on each side of the cabinet: one has a lap joint (Rear) and one does
not (Front). The side skins with the lap joint are marked Rear, and the side skins without the
lap joint are marked Front.
Figure 3-17 Attaching the Rear Side Skin  
3. Attach the skin without the lap joint (Front) over the top bracket and under the bottom bracket  
and gently slide the skin into position.  
Figure 3-18 Attaching the Front Side Skins  
4. Push the side skins together, making sure the skins overlap at the lap joint.  
Attaching the Blower Side Bezels  
The bezels are held on at the top by the bezel lip, which fits over the top of the blower housing  
frame, and are secured at the bottom by tabs that fit into slots on the cabinet side panels.
Use the same procedure to attach the right and left blower side bezels.  
1. Place the side bezel slightly above the blower housing frame.  
Figure 3-19 Attaching the Side Bezels  
2. Align the lower bezel tabs to the slots in the side panels.  
3. Lower the bezel so the bezel top lip fits securely on the blower housing frame and the two  
lower tabs are fully inserted into the side panel slots.  
IMPORTANT: Use four screws to attach the side skins to the top and bottom brackets, except  
for the top bracket on the right side (facing the front of the cabinet). Do not attach the rear  
screw on that bracket. Insert all screws but do not tighten until all side skins are aligned.  
4. Using a T-10 driver, attach the screws to secure the side skins to the brackets.  
5. Repeat step 1 through step 4 for the skins on the other side of the cabinet.  
6. To secure the side bezels to the side skins, attach the blower bracket locks (HP part number  
A5201-00268) to the front and back blowers using a T-20 driver.  
There are two blower bracket locks on the front blowers and two on the rear.  
Attaching the Leveling Feet and Leveling the Cabinet  
After positioning the cabinet in its final location, to attach and adjust the leveling feet, follow  
these steps:  
1. Remove the leveling feet from their packages.  
2. Attach the leveling feet to the cabinet using four T-25 screws.  
Figure 3-20 Attaching the Leveling Feet  
3. Screw down each leveling foot clockwise until it is in firm contact with the floor. Adjust  
each foot until the cabinet is level.  
Installing the Front Door Bezels and the Front and Rear Blower Bezels  
Each cabinet has two doors, one at the front and one at the back. The back door is shipped on  
the chassis and requires no assembly. The front door, which is also shipped on the chassis,  
requires the assembly of two plastic bezels to its front surface and a cable from the door to the  
upper front bezel. In addition, you must install bezels that fit over the blowers at the front and  
back of the cabinet.  
Installing the Front Door Bezels  
The front door assembly includes two cosmetic covers, a control panel, and a key lock. To install  
the front door, you must connect the control panel ribbon cable from the chassis to the control  
panel and mount the two plastic bezels onto the metal chassis door.  
IMPORTANT: The procedure in this section requires two people and must be performed with  
the front metal chassis door open.  
To install the front door assembly, follow these steps:  
1. Open the front door, unsnap the screen, and remove all the filters held in place with Velcro.  
2. Remove the cabinet keys that are taped inside the top front door bezel.  
3. Insert the shoulder studs on the lower door bezel into the holes on the front door metal  
chassis (Figure 3-21).  
Figure 3-21 Installing the Lower Front Door Assembly  
4. Using a T-10 driver, secure the lower door bezel to the front door chassis with 10 of the  
screws provided. Insert all screws loosely, then tighten them after the bezel is aligned.  
5. While another person holds the upper door bezel near the door chassis, attach the ribbon  
cable to the back of the control panel on the bezel, and tighten the two flathead screws (Figure 3-22).
Figure 3-22 Installing the Upper Front Door Assembly  
6. Feed the grounding strap through the door and attach it to the cabinet.  
7. Insert the shoulder studs on the upper door bezel into the holes on the front door metal  
chassis.  
8. Using a T-10 driver, secure the upper door bezel to the metal door with eight of the screws  
provided. Be sure to press down on the hinge side of the bezel while tightening the screws  
to prevent misalignment of the bezel.  
9. Reattach all filters removed in step 1.  
Installing the Rear Blower Bezel  
The rear blower bezel is a cosmetic cover for the blowers and is located above the rear door.  
To install the rear blower bezel, follow these steps:  
1. Open the rear cabinet door.  
NOTE: The latch is located on the right side of the door.  
2. Slide the bezel over the blower housing frame, hooking the lip of the bezel onto the cross  
support of the blower housing while holding the bottom of the bezel. Rotate the bezel  
downward from the top until the bottom snaps in place (Figure 3-23 (page 82)).  
Figure 3-23 Installing the Rear Blower Bezel  
3. Align the bezel over the nuts that are attached to the bracket at the rear of the cabinet.  
4. Using a T-20 driver, tighten the two captive screws on the lower flange of the bezel.  
NOTE: Tighten the screws securely to prevent them from interfering with the door.  
5. Close the cabinet rear door.  
Installing the Front Blower Bezel  
The front blower bezel is a cosmetic cover for the blowers and is located above the front door.  
To install the front blower bezel, follow these steps:  
1. Open the front door.  
NOTE: The latch is located on the right side of the front door.  
2. Position the bezel over the blower housing frame, hooking the lip of the bezel onto the cross  
support of the blower housing (Figure 3-24 (page 83)).  
Figure 3-24 Installing the Front Blower Bezel  
3. Align the bezel over the nuts that are attached to the bracket at the front of the cabinet.  
4. Using a T-20 driver, tighten the two captive screws on the lower flange of the bezel.  
NOTE: Tighten the screws securely to prevent them from interfering with the door.  
5. Close the front door.  
Wiring Check  
WARNING! LETHAL VOLTAGE HAZARD—Hazardous voltages can be present in the cabinet  
if incorrectly wired into the site AC power supply. Always verify correct wiring and product  
grounding before applying AC power to the cabinet. Failure to do so can result in injury to  
personnel and damage to equipment.  
Verify the following items before applying AC power to the cabinet:  
Cabinet safety ground connects to the site electrical system ground and is not left floating  
or connected to a phase.  
The minimum required method of grounding is to connect the green power cord safety
ground to the site ground point through the power cord receptacle wiring. HP does not
recommend relying on cabinet grounding alone; treat cabinet grounding as auxiliary or
additional grounding over and above the ground wire included within the supplied power cord.
If the product ground is left floating, anyone coming into contact with the cabinet can receive a  
lethal shock if a component fails and causes leakage or direct connection of phase energy to the  
cabinet.  
If the product ground connects to a phase, the server is over 200 volts above ground, presenting  
a lethal shock hazard to anyone coming into contact with the product when site AC power is  
applied to the product.  
Verify the connection of the product ground to site AC power ground through a continuity check  
between the cabinet and site AC power supply ground. Perform the continuity check while the  
site AC power supply circuit breakers serving the cabinet and the cabinet circuit breaker are all  
set to OFF.  
To verify that the product ground connects to the site AC power supply ground, follow these  
steps:  
1. Ensure that the site AC power supply circuit breakers serving the cabinet are set to OFF.  
2. Ensure that the cabinet main circuit breaker is set to OFF.  
3. Touch one test probe to the site AC power supply ground source.  
4. Touch the other test probe to an unpainted metal surface of the cabinet.  
NOTE: If the digital multimeter (DMM) leads cannot reach from the junction box to the
cabinet, use a piece of wire connected to the ground terminal of the junction box.  
5. Check for continuity indication of less than 0.1 ohm.  
If continuity is not found, check to ensure that the DMM test leads are making good  
contact to unpainted metal and try again.  
If continuity is still not found, disconnect the cabinet site AC power immediately and  
notify the customer of the probability of incorrectly wired AC power to the cabinet.  
If continuity is good, and connection of the cabinet to site AC power supply ground  
(and not floating or connected to a phase) is verified, then check the voltage.  
NOTE: For dual power sources, proceed to “Checking Voltage” (page 88) with special attention  
to PDCA 0 ground pin to PDCA 1 ground pin voltage. Anything greater than 3 V is cause for  
further investigation.  
Installing and Verifying the PDCA  
All systems are delivered with the appropriate cable plug for options 6 and 7 (Figure 3-25).
Check the voltages at the receptacle prior to plugging in the PDCA plug.
To verify the proper wiring for a 4-wire PDCA, use a digital voltmeter (DVM) to measure  
the voltage at the receptacle. Voltage must read 200–240 V ac phase-to-phase as measured  
between the receptacle pins as follows: L1 to L2, L2 to L3, L1 to L3 (Figure 3-26 (page 85)).  
To verify the proper wiring for a 5-wire PDCA, use a DVM to measure the voltage at the  
receptacle. Voltage must read 200–240 V ac phase-to-neutral as measured between the  
receptacle pins as follows: L1 to N, L2 to N, L3 to N (Figure 3-27 (page 86)).  
Figure 3-25 PDCA Assembly for Options 6 and 7  
Figure 3-26 A 4-Wire Connector  
Figure 3-27 A 5-Wire Connector  
To install the PDCA, follow these steps:  
WARNING! Make sure the circuit breaker on the PDCA is OFF.  
1. Remove the rear PDCA bezel by removing the four retaining screws.  
2. Run the power cord down through the appropriate opening in the floor tile.  
3. Insert the PDCA into its slot (Figure 3-28 (page 86)).  
Figure 3-28 Installing the PDCA  
4. Using a T-20 driver, attach the four screws that hold the PDCA in place.  
5. If required, repeat step 2 through step 4 for the second PDCA.  
6. Reinstall the rear PDCA bezel.  
CAUTION: Do not measure voltages with the PDCA breaker set to ON. Make sure the  
electrical panel breaker is ON and the PDCA breaker is OFF.  
7. Plug in the PDCA connector.  
8. Check the voltage at the PDCA:  
a. Using a T-20 driver, remove the screw on the hinged panel at the top of the PDCA.  
b. Using a voltmeter, measure the test points and compare the values to the ranges given  
in Table 3-3 (page 87) to make sure the voltages conform to the specifications for the  
PDCA and local electrical specifications.  
If the voltage values do not match the specifications, have the customer contact an  
electrician to troubleshoot the problem.  
Figure 3-29 Checking PDCA Test Points (5-Wire)  
Table 3-3 4- and 5-Wire Voltage Ranges

4-Wire                      5-Wire
L2 to L3: 200-240 V         L1 to N: 200-240 V
L2 to L1: 200-240 V         L2 to N: 200-240 V
L1 to L3: 200-240 V         L3 to N: 200-240 V
                            N to Ground: see note 1

1. Neutral to ground voltage can vary from millivolts to several volts depending on the distance to the ground/neutral
bond at the transformer. Any voltage over 3 V must be investigated by a site preparation or power specialist.
Checking Voltage  
The voltage check ensures that all phases (and neutral, for international systems) are wired  
correctly for the cabinet and that the AC input voltage is within specified limits.  
NOTE: If you use a UPS, see applicable UPS documentation for information to connect the  
server and to check the UPS output voltage. UPS User Manual documentation is shipped with  
the UPS and is available at http://docs.hp.com.  
1. Verify that site power is OFF.  
2. Open the site circuit breakers.  
3. Verify that the receptacle ground connector is connected to ground. See Figure 3-30 for  
connector details.  
4. Set the site power circuit breaker to ON.  
Figure 3-30 Wall Receptacle Pinouts  
5. Verify that the voltage between receptacle pins x and y is 200–240 volts ac.  
6. Set the site power circuit breaker to OFF.  
7. Ensure that power is removed from the server.  
8. Route and connect the server power connector to the site power receptacle.  
For locking type receptacles, line up the key on the plug with the groove in the receptacle.  
Push the plug into the receptacle and rotate to lock the connector in place.  
WARNING! Do not set site ac circuit breakers serving the processor cabinets to ON before  
verifying that the cabinet has been wired into the site ac power supply correctly. Failure to  
do so can result in injury to personnel or damage to equipment when ac power is applied  
to the cabinet.  
9. Set the site power circuit breaker to ON.  
WARNING! There is a risk of shock hazard while testing primary power. Use properly  
insulated probes. Be sure to replace the access cover when you finish testing primary power.  
10. Set the server power to ON.  
11. Check that the indicator LED on each power supply is lit. See Figure 3-31.  
Figure 3-31 Power Supply Indicator LED  
Removing the EMI Panels  
Remove the front and back electromagnetic interference (EMI) panels to access ports and to  
visually check whether components are in place and the LEDs are properly illuminated when  
power is applied to the system.  
To remove the front and back EMI panels, follow these steps:  
1. Using a T-20 driver, loosen the captive screw at the top center of the front EMI panel (Figure 3-32).
Figure 3-32 Removing Front EMI Panel Screw  
2. Use the handle provided to remove the EMI panel and set it aside.  
When in position, the front and back EMI panels fit tightly. Removing them requires
controlled but firm force.
3. Loosen the captive screw at the lower center of the back EMI panel (Figure 3-33 (page 90)).  
Figure 3-33 Removing the Back EMI Panel  
4. Use the handle provided to gently remove the EMI panel and set it aside.  
Connecting the Cables  
The I/O cables are attached and tied inside the cabinet. When the system is installed, these cables  
must be untied, routed, and connected to the cabinets where the other ends of the cables terminate.
Use the following guidelines and Figure 3-34 to route and connect cables. For more information  
on cable routing, see “Routing the I/O Cables” (page 91).  
Each cabinet is identified with a unique color. The cabinet color label is located at the top  
of the cabinet.  
The colored label closest to the cable connector corresponds to the color of the cabinet to which  
it is attached.  
The colored label farther away from the cable connector corresponds to the color of the  
cabinet where the other end of the cable is attached. In Figure 3-34, the dotted lines show  
where the label is located and where the cable terminates.  
Each cable is also labeled with a unique number. This number label is applied on both ends  
of the cable and near the port where the cable is to be connected. In Figure 3-34, the cable  
number labels are indicated by circled numbers, and the cabinet port numbers are indicated  
with boxed numbers.  
Figure 3-34 Cable Labeling  
Routing the I/O Cables  
Routing the cables is a significant task in the installation process. Efficient cable routing is  
important not only for the initial installation, but also to aid in future service calls. The most  
efficient use of space is to route cables so that they are not crossed or tangled. Figure 3-35 (page 92)  
illustrates efficient I/O cable routing.  
Figure 3-35 Routing I/O Cables  
To route cables through the cable groomer at the bottom rear of the cabinet, follow these steps:  
1. Remove the cable access plate at the bottom of the groomer.  
2. Beginning at the front of the cabinet, route the cables using the following pattern:  
a. Route the first cable on the left side of the leftmost card cage first. Route it under the  
PCI-X card cage toward the back of the cabinet and down through the first slot at the  
right of the cable groomer.  
b. Route the second cable on the left side of the leftmost card cage to the right of the first  
cable, and so on, until routing all of the cables in the card cage is complete.  
The number and width of cables vary from system to system. Use judgment and the
customer's present and estimated future needs to determine how many cables to route
through each cable groomer slot.  
c. After routing the leftmost card cage at the front of the cabinet, route the cables in the  
rightmost card cage at the back of the cabinet. Begin with the right cable in the card  
cage and work toward the left.  
d. After routing the cables in the rightmost card cage at the rear of the cabinet, return to  
the front of the system and route the cables in the next card cage to the right.  
e. Repeat steps a through d until all the cables are routed.  
3. Connect the management processor cables last.  
4. Reattach the cable access plate at the bottom of the cable groomer.  
5. Reattach the cable groomer kick plate at the back of the cabinet.  
6. Slip the L bracket under the power cord on the rear of the PDCA.  
7. While holding the L bracket in place, insert the PDCA completely into the cabinet and secure  
the L bracket with one screw.  
Installing the Support Management Station  
The Support Management Station (SMS) ships separately in boxes. The SMS software and the
last three revisions of Superdome firmware are preloaded at the factory.
NOTE:  
The SMS Shelf may or may not be installed in the factory prior to shipping.  
Installing the SMS Support Shelf  
1. Unpack the SMS rp5700 PC and Support Shelf from their respective shipping containers.  
2. Install the Support Shelf Rack at the U15 position in the 10KG2 Rack and place the SMS PC  
onto the shelf.  
See the following:  
Connecting the SMS to the Superdome  
The Superdome Cookbook document is found through the following website (requires  
authentication):  
In the Search the Sales Library: field, enter the keywords: SMS Cookbook. A second window is  
displayed with the file information. Select Worldwide, English (US) to download.
NOTE: The SMS Cookbook file is provided as a Microsoft Visio file.
The SMS software and the last three revisions of Superdome firmware are preloaded onto the
SMS at the factory. If needed, see the following section for the procedures to obtain the SMS
software and Superdome firmware files.
SMS Software and Superdome Firmware Downloading Procedure  
Go to the following URL (requires authentication):  
Select the STSD SMS & FW Files link at approximately mid page.  
The Superdome_Binaries.exe file is a self-extracting archive containing the following
Firmware binaries and SMS Software Utilities for Superdome Servers:  
1. SX1000 – Last three revisions of PA and IA Firmware  
2. SX2000 – Last three revisions of PA and IA Firmware  
3. Legacy – Last three revisions of PA Firmware  
4. SMS Software Utilities:  
— CYGWIN  
— EIT  
— PARCLI  
— SCAN  
Either copy the Superdome_Binaries.exe file to the desktop, or save it to a CD.
Open the Superdome_Binaries.exe file.
NOTE: The /opt directory will be created as the default location.
SMS Software Utilities  
Move the Software Utilities onto the SMS as indicated:  
SCAN — c:\opt\scansw
CYGWIN — c:\CYGWIN
PARCLI — c:\Program Files\Hewlett-Packard\nParCommands
EIT Tools — c:\Program Files\Hewlett-Packard\EIT
Superdome Firmware Instructions  
NOTE: Reference to pa or ia denotes two firmware types: one for PA-RISC processors (pa) and
one for Itanium processors (ia). This applies to the sx1000, the sx2000, and the Legacy
servers. The Legacy servers have only PA-RISC processors (pa) installed.
PC SMS  
1. Create a c:\opt\firmware\sxX000\X.Xx directory.
Example 3-1 Directory Example  
sx2000\8.7f  
2. Copy the h_ipf_(pa or ia)_sxX000_X.Xx.tar.gz file to the
c:\opt\firmware\sxX000\X.Xx directory.
3. Open a Cygwin window.
4. Enter the following command to change to the targeted directory:
cd c:\opt\firmware\sxX000\X.Xx  
5. Enter the following command to uncompress the gzip file:
gunzip h_ipf_(pa or ia)_sxX000_X.Xx.tar.gz
6. Enter the following command to un-tar the tar file:
tar -xvf h_ipf_(pa or ia)_sxX000_X.Xx.tar
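For example, using the sx2000 8.7f directory from Example 3-1 with the Itanium (ia) bundle, the sequence entered in the Cygwin window would look similar to the following. The file name is illustrative only and depends on the firmware revision delivered; forward slashes are used to avoid backslash escaping in the Cygwin shell:

cd c:/opt/firmware/sx2000/8.7f
gunzip h_ipf_ia_sx2000_8.7f.tar.gz
tar -xvf h_ipf_ia_sx2000_8.7f.tar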
HP-UX SMS  
1. Create a /opt/firmware/sxX000/X.Xx directory.
Example 3-2 Directory Example  
sx2000/8.7f  
2. Copy the h_ipf_(pa or ia)_sxX000_X.Xx.tar.gz file to the
/opt/firmware/sxX000/X.Xx directory.
3. Change the directory to:  
/opt/firmware/sxX000/X.Xx  
4. Enter the following command to uncompress the gzip file:
gunzip h_ipf_(pa or ia)_sxX000_X.Xx.tar.gz  
5. Enter the following command to un-tar the tar file:
tar -xvf h_ipf_(pa or ia)_sxX000_X.Xx.tar  
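For example, on an HP-UX SMS, using the sx2000 8.7f directory from Example 3-2 with the PA-RISC (pa) bundle, the sequence would look similar to the following (the file name is illustrative and depends on the firmware revision delivered):

cd /opt/firmware/sx2000/8.7f
gunzip h_ipf_pa_sx2000_8.7f.tar.gz
tar -xvf h_ipf_pa_sx2000_8.7f.tar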
Configuring the Event Information Tools  
There are three tools included in the Event Information Tools (EIT) bundle for the SMS. They  
are the Console Logger, the IPMI Log Acquirer and the IPMI Event Viewer. These tools work  
together to collect, interpret, and display system event messages on the SMS.  
EIT Tools Functionality  
The Console Logger captures the commands typed at the console, the response displayed, and  
alert messages generated by the system. It stores them on the SMS disk drive in a continuous  
log format.  
The IPMI Log Acquirer acquires FPL and FRUID logs from the remote system and stores them  
on the SMS disk drive.  
The IPMI Event Viewer analyzes the FPL logs captured by the IPMI Log Acquirer and displays  
the system event information through either a command-line or Web-based interface.  
Where to Find the EIT Documentation  
The latest documentation for setting up and configuring these tools is available at:  
Once you are at the website, select “Event Information Tools (EIT) - formerly SMS”. You will  
find documentation for each of the following subjects:  
Console Logger  
IPMI Event Viewer  
IPMI Log Acquirer  
Release Notes  
Turning On Housekeeping Power  
To turn on housekeeping power to the system, follow these steps:  
1. Verify that the ac voltage at the input source is within specifications for each cabinet being  
installed.  
2. Ensure the following:  
The ac breakers are in the OFF position.  
The cabinet power switch at the front of the cabinet is in the OFF position.  
The ac breakers and cabinet switches on the I/O expansion cabinet (if present) are in  
the OFF position.  
3. If the complex has an IOX cabinet, power on this cabinet first.  
IMPORTANT: The 48 V switch on the front panel must be OFF.  
4. Turn on the ac breakers on the PDCAs at the back of each cabinet.
In a large complex, power on the cabinets in one of the two following orders:  
9, 8, 1, 0  
8, 9, 0, 1  
On the front and back panels, the HKP and the Present LEDs illuminate (Figure 3-36).  
On cabinet 0, the HKP and the Present LEDs illuminate, but only the HKP LED  
illuminates on cabinet 1 (the right cabinet).  
Figure 3-36 Front Panel with HKP and Present LEDs  
5. Examine the BPS LEDs (Figure 3-37).  
When on, the breakers on the PDCA distribute ac power to the BPSs. Power is present at  
the BPSs when:  
The amber LED next to the AC0 Present label is on (if the breakers on the PDCA are on  
the left side at the back of the cabinet).  
The amber LED next to the AC1 Present label is on (if the breakers on the PDCA are on  
the right side at the back of the cabinet).  
Figure 3-37 BPS LEDs  
Connecting the MP to the Customer LAN  
This section describes how to connect the management processor (MP) to the customer LAN
and how to set up and verify the connection. LAN information includes the MP network name (host name), the MP IP address,
the subnet mask, and the gateway address. The customer provides this information.  
Connecting the MP to the Network  
NOTE: Based on the customer's existing SMS configuration, make the appropriate modifications
to add the Superdome/sx2000 SMS LAN configuration.
Unlike earlier systems, which required the MP to be connected to the private LAN, the sx2000  
system MP now connects to the customer's LAN through the appropriate hub, switch, router,
or other customer-provided LAN device.  
In some cases, the customer can connect the SMS to the MP on the private management LAN.  
In this case, inform the customer that administrators will not be able to access the SMS remotely  
and will have to use the SMS as a “local” device.  
Connect the MP to the customer's LAN:
1. Connect one end of the RJ-45 LAN cable to the LAN port on the MP (Figure 3-38).  
Figure 3-38 MP LAN Connection Location  
2. Connect the other end of the LAN cable to the customer-designated LAN port. Obtain the  
IP address for the MP from the customer.  
Connect the dial-up modem cable between the MP modem and the customer's phone line
connection.  
Setting the Customer IP Address  
NOTE: The default IP address for the customer LAN port on the MP is 192.168.1.1.  
To set the customer LAN IP address, follow these steps:  
1. From the MP Command Menu prompt MP:CM>, enter lc (LAN configuration).
The screen displays the default values and asks if you want to modify them.  
TIP: Write down the information, as it may be required for future troubleshooting.  
If you are not already in the Command Menu, enter ma to return to the Main Menu, then
enter cm.  
The LAN configuration screen appears (Figure 3-39).  
Figure 3-39 LAN Configuration Screen  
2. If the LAN software on the MP is working properly, the message LAN status: UP and
RUNNING appears. The value in the IP address field has been set at the factory.
NOTE: The customer LAN IP address is designated LAN port 0.  
3. The prompt asks if you want to modify LAN port 0. Enter Y.  
The current customer IP address appears; then the Do you want to modify it? (Y/[N]) prompt  
appears.  
4. Enter Y.  
5. Enter the new IP address.  
6. Confirm the new address.  
7. Enter the MP network name.  
This is the host name for the customer LAN. You can use any name up to 64 characters long.  
It can include alphanumerics, dash (-), underscore (_), period (.), or the space character. HP
recommends that the name be a derivative of the complex name. For example,  
Maggie.com_MP.  
8. Enter the LAN parameters for Subnet mask and Gateway address.  
Obtain this information from the customer.  
9. To display the LAN parameters and status, enter the ls command at the MP Command
Menu prompt (MP:CM>).  
The ls command screen appears (Figure 3-40).  
Figure 3-40 The ls Command Screen  
To return to the MP Main Menu, enter ma.  
To exit the MP, enter x at the MP Main Menu.
10. Check the settings for the model string, UUID, and Creator Product Name using the ID  
command. For example:  
MP modifiable stable complex configuration data fields.
Model String            : 9000/800/SD32B
Complex System Name     : maggie
Original Product Number : A5201A
Current Product Number  : A9834A
UUID                    : ffffffff-ffff-ffff-ffff-ffffffffffff
Creator Manufacturer    : hp
Creator Product Name    : superdome server SD32B
Creator Serial Number   : USRxxxxxxxx
OEM Manufacturer        :
OEM Product Name        :
OEM Serial Number       : USRxxxxxxxx
11. Set the date and time using the MP command.  
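The MP command sequence used in this procedure, condensed here for illustration only (the lc dialog is interactive, and the IP address, MP network name, subnet mask, and gateway values come from the customer), is similar to the following:

MP:CM> lc    (answer the prompts to set the IP address, MP network name, subnet mask, and gateway)
MP:CM> ls    (display and verify the LAN parameters and status)
MP:CM> ma    (return to the MP Main Menu; enter x at the Main Menu to exit the MP)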
Booting and Verifying the System  
After installing the system, verify that the proper hardware is installed and booted.  
This section describes how to power on the cabinet and boot and test each partition. You must  
open a console window for each partition. You must also open two additional windows, one  
window for initiating reset on partitions and the other for monitoring system partition status.  
Initiate the MP in each window.  
NOTE: The HKP must be ON and the 48 V switch on the front panel must be OFF. To turn on
housekeeping power, see “Turning On Housekeeping Power”.
Connecting to the MP  
Before powering on the cabinet, you need to open several windows connected to the MP. Then  
switch the 48 V on and boot each partition to the EFI prompt (the BCH prompt in the case of an
HP 9000/sx2000 server). To connect to the MP, follow these steps:  
1. On the SMS, open the following command prompt windows:  
One console window for each partition (MP CO option)
One for initializing the RS command from the MP
One for monitoring partition status (MP VFP option)
In each window, connect to the MP by entering the following:  
telnet <MP hostname>
Or  
telnet <IP address>  
2. Enter the appropriate login and password at the MP prompts (Figure 3-41).
Figure 3-41 Logging In  
The MP Main Menu appears (Figure 3-42).  
Figure 3-42 Main MP Menu  
3. Repeat steps 1 and 2 for each partition.  
4. In one window, bring up the command prompt by entering cm at the MP> prompt (Figure 3-43).
Figure 3-43 MP Command Option  
5. In another window, open the Virtual Front Panel (VFP) by entering vfp at the MP prompt
(Figure 3-44). Use this window to observe partition status.  
Figure 3-44 MP Virtual Front Panel  
6. From the VFP menu, enter s to select the whole system, or enter the partition number to
select a particular partition. An output similar to Figure 3-45 appears. In this example, no  
status is listed because the system 48 V has not been switched on.  
Figure 3-45 Example of Partition State—Cabinet Not Powered Up  
7. In each of the remaining windows, open the partition console for each partition by entering
co at the MP> prompt (Figure 3-46). These windows open blank.
NOTE: If any information appears in the windows, disregard it because the cabinet is
powered off.
Figure 3-46 MP Console Option  
Powering On the System 48 V Power Supply  
To power on the system 48 V power supply, follow these steps:  
1. Switch on the 48 V supply from each cabinet front panel.  
IMPORTANT: If the complex has an IOX cabinet, power on this cabinet first.  
In a large complex, power on cabinets in one of the two following orders: 9, 8, 1, 0 or 8, 9, 0,  
1.  
IMPORTANT: The MP must be running in each window.  
As the cabinet boots, observe the partition activity in the window displaying the VFP.  
2. For HP Integrity Superdome/sx2000 systems, follow the procedure in “Booting the HP
Integrity Superdome/sx2000 to an EFI Shell”.
3. For HP 9000/sx2000 systems, follow the procedure in “Booting an HP 9000 sx2000 Server to
BCH”.
Booting the HP Integrity Superdome/sx2000 to an EFI Shell  
After powering on or using the CM bo command, all partition console windows show activity
while the firmware initializes and stops momentarily at an EFI Boot Manager menu (Figure 3-47).  
Figure 3-47 HP Integrity Superdome/sx2000 EFI Boot Manager  
Use the up and down arrow keys on the keyboard to highlight EFI Shell (Built-in) and
press Enter. Do this for all partitions.  
After you start the EFI Shell, the console window displays the EFI shell prompt (Figure 3-48).  
Figure 3-48 EFI Shell Prompt  
NOTE: If autoboot is enabled for an nPartition, you must interrupt it to stop the boot process  
at the EFI firmware console.  
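As an optional, illustrative check (not part of the installation procedure) that a partition has reached a live EFI Shell, you can enter a harmless standard shell command such as map, which refreshes and lists the device mappings:

Shell> map -r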
The VFP indicates that each partition is at system firmware console (Figure 3-49).  
Figure 3-49 HP Integrity Superdome/sx2000 Partitions at System Firmware Console  
Booting an HP 9000 sx2000 Server to BCH  
After you power on the server or use the MP BO command to boot an nPartition past
boot-is-blocked (BIB), the nPartition console shows activity while the firmware initializes and  
stops at the BCH Main Menu (the Main Menu: Enter command or menu > prompt).
To redisplay the current menu and its available commands, enter the BCH DI command.
Main Menu: Enter command or menu > di

---- Main Menu ---------------------------------------------------------------

     Command                         Description
     -------                         -----------
     BOot [PRI|HAA|ALT|<path>]       Boot from specified path
     PAth [PRI|HAA|ALT] [<path>]     Display or modify a path
     SEArch [ALL|<cell>|<path>]      Search for boot devices
     ScRoll [ON|OFF]                 Display or change scrolling capability

     COnfiguration menu              Displays or sets boot values
     INformation menu                Displays hardware information
     SERvice menu                    Displays service commands

     DIsplay                         Redisplay the current menu
     HElp [<menu>|<command>]         Display help for menu or command
     REBOOT                          Restart Partition
     RECONFIGRESET                   Reset to allow Reconfig Complex Profile

----
Main Menu: Enter command or menu >
For information about any of the available BCH commands, enter the HE command.
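For example, to boot an nPartition from its primary boot path once the partition is at the BCH Main Menu, you would enter the BOot command with the PRI keyword (illustration only; the primary path must already be set correctly for the partition):

Main Menu: Enter command or menu > BO PRI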
Verifying the System  
To verify the system, follow these steps:  
1. To observe the power status, enter ps at the CM> prompt. A status screen similar to the one
in Figure 3-50 appears.  
Figure 3-50 Power Status First Window  
2. At the Select Device: prompt, enter b, then the cabinet number to check the power status
of the cabinet. Observe Power Switch: on and Power: enabled (Figure 3-51).
Figure 3-51 Power Status Window  
Figure 3-51 shows that cells are installed in slots 0 and 4. In the cabinet, verify that cells are  
physically located in slots 0 and 4.  
3. Press <CR> one more time to observe the power status (Figure 3-52).  
Figure 3-52 Power Status Showing State of UGUY LEDs  
4. Verify that there is an asterisk (*) in the columns marked MP, CLU, and PM.  
IMPORTANT: An asterisk (*) appears in the MP column only for cabinet 0; that is, the  
cabinet containing the MP.  
Verify that there is an asterisk (*) for each of the cells installed in the cabinet by comparing  
what is in the Cells column with the cells located inside the cabinet.  
Running JET Software  
To ensure that the network diagnostic is enabled, enter the nd command at the MP Command Menu
prompt: MP:CM> nd. The network diagnostic must be enabled to run scan and to perform firmware updates on the system.
The JTAG Utility for Scan Tests (JUST) Exploration Tool (JET) collects system information for  
each system on a network and places it in files for use by other scan tools. JET gathers  
configuration data by executing a series of queries targeted at the MP and the CLU portion of  
the UGUY board.  
IMPORTANT: You must resolve any problems you find as a result of running JET before booting  
the operating system.  
Running JUST  
To run JUST to ensure that the hardware is working properly, follow these steps:  
1. Enter jet_setup at the Windows® SMS command window or enter scan_setup at the
HP-UX SMS command window.
2. Enter the complex_name, IP address, and system type.  
3. Enter jet -s <complex_name>.  
4. Enter just -s <complex_name>.  
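For example, for a complex named maggie (the illustrative name used elsewhere in this chapter), the sequence entered at the Windows SMS command window would be similar to the following; jet_setup prompts for the complex name, IP address, and system type:

jet_setup
jet -s maggie
just -s maggie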
See the JET User Guide, JUST Users Guide, and other related documentation for testing located  
in:  
\opt\scansw\docs\stt directory on the Windows® Support Management Station
/opt/scansw/docs/stt directory on the HP-UX Support Management Station
IMPORTANT: After scan testing successfully completes, reset the complex by cycling the AC  
power.  
Power Cycling After Using JET  
After using JET, you must cycle the system power because the offline diagnostic can deallocate
the CPUs.  
To remove the 48 V power, run the MP pe command. Then cycle the ac breakers on the rear of
the cabinets. For details on power cycling the system, see Appendix C (page 173). Leave power  
off for about 30 seconds to allow the backplane CSRs to reset.  
IMPORTANT: If the complex has any IOX cabinets with IDs 8 or 9, you must power cycle these  
cabinets in the proper sequence.  
Offline Diagnostic Environment  
Now that scan has been run, you can run all the appropriate diagnostics for this system. See the  
offline diagnostic environment (ODE) documentation for instructions.  
Attaching the Rear Kick Plates  
Kick plates protect cables from accidentally being disconnected or damaged and add an attractive  
cosmetic touch to the cabinet. You must attach three metal kick plates to the bottom rear of the  
cabinet.  
To install the kick plates, follow these steps:  
1. Hold the left kick plate in position and attach a clip nut (0590-2318) on the cabinet column  
next to the hole in the flange at the top of the kick plate (Figure 3-53).  
2. Using a screw (0515-0671) and a T-25 driver, attach the flange on the kick plate to the clip
nut.
3. Using a T-10 driver and a screw, attach the bottom of the kick plate to the center hole in the  
leveling foot.  
Figure 3-53 Attaching Rear Kick Plates  
4. Perform steps 1–3 on the right kick plate.  
5. Position the upper flange of the center kick plate under the I/O tray's complementary
mounting bracket to retain the center kick plate top flanges. No top screws are needed on
the center kick plate. Orient this asymmetrical bracket with the hole located nearest the edge  
in the up position.  
6. Using a T-20 driver, tighten the thumbscrews at the bottom of the center kick plate.  
Performing a Visual Inspection and Completing the Installation  
After booting the system, carefully inspect it and reinstall the EMI panels. To perform a final  
inspection and complete the installation, follow these steps:  
1. Visually inspect the system to verify that all components are in place and secure.  
2. Check that the cables are secured and routed properly.  
3. Check that the cell board ejectors are secure (Figure 3-54).  
If the ejectors are broken or open, the cell board is disconnected.  
Figure 3-54 Cell Board Ejectors  
4. Reinstall the front EMI panel (Figure 3-55).  
Figure 3-55 Front EMI Panel Flange and Cabinet Holes  
a. Hook the flange at the lower corners of the EMI panel into the holes on the cabinet.  
b. Position the panel at the top lip, and lift the panel up while pushing the bottom into  
position.  
If needed, compress the EMI gasket to seat the panel properly.  
c. Reattach the screw at the top of the EMI panel.  
5. Check that the cables inside the rear enclosure are secure.  
6. Reinstall the back EMI panel (Figure 3-56 (page 112)).  
a. Align the lip inside the cabinet with the lip on the EMI panel.  
Figure 3-56 Reinstalling the Back EMI Panel  
b. Push the EMI panel up and in. If needed, compress the EMI gasket at the top of the  
enclosure to get the panel to seat properly.  
c. Reattach the screw at the bottom of the EMI panel.  
Conducting a Post-Installation Check  
After the system is installed in a computer room and verified, conduct the post-installation check.  
Before turning the system over to the customer, inspect the system visually and clean up the  
installation area. Perform the following:  
Inspect circuit boards. Verify that all circuit boards are installed and properly seated and  
that the circuit board retainers are reinstalled.  
Inspect cabling. Ensure that all cables are installed, secured, and properly routed.  
Inspect test points. Verify that test leads are removed from the test points and that the test  
points are properly covered.  
Clean up and dispose of debris. Remove all debris from the area and dispose of it properly.  
Perform final check. Inspect the area to ensure that all parts, tools, and other items used to  
install the system are disposed of properly. Then close and lock the doors.  
Enter information in the Gold Book. When the installation and cleanup are complete, make  
the appropriate notations in the Gold Book shipped with the system.  
Obtain customer acceptance (if required). Be sure to thank the customer for choosing HP.  
4 Booting and Shutting Down the Operating System  
This chapter presents procedures for booting an operating system (OS) on an nPartition (hardware  
partition) and procedures for shutting down the OS.  
Operating Systems Supported on Cell-based HP Servers  
HP supports nPartitions on cell-based HP 9000 servers and cell-based HP Integrity servers. The  
following list describes the OSes supported on cell-based servers based on the HP sx2000 chipset.  
HP 9000 servers have PA-RISC processors and include the following cell-based models  
based on the HP sx2000 chipset:  
HP 9000 Superdome (SD16B, SD32B, and SD64B models)  
HP rp8440  
HP rp7440  
These HP 9000 servers run HP-UX 11i Version 1 (B.11.11). Refer to “Booting and Shutting  
Down HP-UX” (page 118) for details on booting an OS on these servers.  
HP Integrity servers have Intel® Itanium® processors and include the following cell-based  
models based on the HP sx2000 chipset:  
HP Integrity Superdome (SD16B, SD32B, and SD64B models)  
HP rx8640  
HP rx7640  
All HP Integrity servers based on the HP sx2000 chipset run the following OSes:  
HP-UX 11i Version 2 (B.11.23) — Refer to “Booting and Shutting Down HP-UX”  
(page 118) for details.  
Microsoft® Windows® Server 2003 — Refer to “Booting and Shutting Down Microsoft  
Windows” (page 133) for details.  
HP Integrity servers based on the HP sx2000 chipset run the following OSes only in nPartitions  
that have dual-core Intel® Itanium® processors:  
HP OpenVMS I64 8.3 or later — Supported only in nPartitions that have dual-core  
Intel® Itanium® processors. Prior releases of OpenVMS I64 are not supported on servers  
based on the HP sx2000 chipset.  
Red Hat Enterprise Linux 4 Update 4 — On servers based on the HP sx2000 chipset, is
supported only in nPartitions that have dual-core Intel® Itanium® processors. Prior  
releases of Red Hat Enterprise Linux are not supported on servers based on the HP  
sx2000 chipset.  
Red Hat Enterprise Linux 5 — On servers based on the HP sx2000 chipset, is supported
only in nPartitions that have dual-core Intel® Itanium® processors. Prior releases of  
Red Hat Enterprise Linux are not supported on servers based on the HP sx2000 chipset.  
SuSE Linux Enterprise Server 10 — On servers based on the HP sx2000 chipset, is  
supported only in nPartitions that have dual-core Intel® Itanium® processors. Prior  
releases of SuSE Linux Enterprise Server are not supported on servers based on the HP  
sx2000 chipset.  
NOTE: SuSE Linux Enterprise Server 10 is supported on HP rx7640 and rx8640 servers, and will be supported on other cell-based HP Integrity servers with dual-core Intel® Itanium® processors (Superdome) beginning with SuSE Linux Enterprise Server 10 Service Pack 1.
NOTE: On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM)  
parameter, which determines how firmware may interleave memory residing on the cell. The  
supported and recommended CLM setting for the cells in an nPartition depends on the OS  
running in the nPartition. Some OSes support using CLM, and some do not. For details on CLM  
support for the OS you will boot in an nPartition, refer to the booting section for that OS.  
System Boot Configuration Options  
This section briefly discusses the system boot options you can configure on cell-based servers.  
You can configure boot options that are specific to each nPartition in the server complex.  
HP 9000 Boot Configuration Options  
On cell-based HP 9000 servers, the configurable system boot options include boot device paths (PRI, HAA, and ALT) and the autoboot setting for the nPartition. To set these options from HP-UX, use the setboot command. From the BCH system boot environment, use the PATH command at the BCH Main Menu to set boot device paths, and use the PATHFLAGS command at the BCH Configuration menu to set autoboot options. For details, issue HELP command at the appropriate BCH menu, where command is the command for which you want help.
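For example, a session to set the primary boot path might look like the following; the device path 0/0/2/0/0.13 is shown only as an illustration. From the BCH Main Menu:
Main Menu: Enter command or menu > PATH PRI 0/0/2/0/0.13
From HP-UX, the same primary boot path can be set with the setboot command:
# setboot -p 0/0/2/0/0.13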
HP Integrity Boot Configuration Options  
On cell-based HP Integrity servers, you must properly specify the ACPI configuration value,  
which affects the OS startup process and on some servers can affect the shutdown behavior. You  
also can configure boot device paths and the autoboot setting for the nPartition. The following  
list describes each configuration option:  
Boot Options List The boot options list is a list of loadable items available for you to select  
from the EFI Boot Manager menu. Ordinarily, the boot options list includes the EFI Shell  
and one or more OS loaders.  
The following example includes boot options for HP OpenVMS, Microsoft Windows, HP-UX,  
and the EFI Shell. The final item in the EFI Boot Manager menu, the Boot Configuration  
menu, is not a boot option. The Boot Configuration menu enables system configuration  
through a maintenance menu.  
EFI Boot Manager ver 1.10 [14.61] Please select a boot option  
HP OpenVMS 8.3  
EFI Shell [Built-in]  
Windows Server 2003, Enterprise  
HP-UX Primary Boot: 4/0/1/1/0.2.0  
Boot Option Maintenance Menu  
Use ^ and v to change option(s). Use Enter to select an option  
NOTE: In some versions of EFI, the Boot Configuration menu is listed as the Boot Option  
Maintenance Menu.  
To manage the boot options list for each system, use the EFI Shell, the EFI Boot Configuration menu, or OS utilities.
At the EFI Shell, the bcfg command supports listing and managing the boot options list for all OSes except Microsoft Windows. On HP Integrity systems with Windows installed, the \MSUtil\nvrboot.efi utility is provided for managing Windows boot options from the EFI Shell. On HP Integrity systems with OpenVMS installed, the \efi\vms\vms_bcfg.efi and \efi\vms\vms_show.efi utilities are provided for managing OpenVMS boot options.
The EFI Boot Configuration menu provides the Add a Boot Option, Delete Boot Option(s),  
and Change Boot Order menu items. (If you must add an EFI Shell entry to the boot options  
list, use this method.)  
To save and restore boot options, use the EFI Shell variable command. The variable -save file command saves the contents of the boot options list to the specified file on an EFI disk partition. The variable -restore file command restores the boot options list from the specified file that was previously saved. Details also are available by entering help variable at the EFI Shell.
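For example, the following EFI Shell commands save the boot options list to a file and later restore it; the file name bootlist.nvr is only an example:
fs0:\> variable -save bootlist.nvr
fs0:\> variable -restore bootlist.nvr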
OS utilities for managing the boot options list include the HP-UX setboot command and the HP OpenVMS @SYS$MANAGER:BOOT_OPTIONS.COM command.
The OpenVMS I64 installation and upgrade procedures assist you in setting up and validating  
a boot option for your system disk. HP recommends that you allow the procedure to do  
this. Alternatively, you can use the @SYS$MANAGER:BOOT_OPTIONS.COM command (also
referred to as the OpenVMS I64 Boot Manager utility) to manage boot options for your  
system disk. The OpenVMS I64 Boot Manager (BOOT_OPTIONS.COM) utility is a menu-based  
utility and is easier to use than EFI. To configure OpenVMS I64 booting on Fibre Channel  
devices, you must use the OpenVMS I64 Boot Manager utility (BOOT_OPTIONS.COM). For  
more information on this utility and other restrictions, refer to the HP OpenVMS for Integrity  
Servers Upgrade and Installation Manual.  
For details, refer to the following sections.  
To set HP-UX boot options, refer to “Adding HP-UX to the Boot Options List” (page 118).
To set OpenVMS boot options, refer to “Adding HP OpenVMS to the Boot Options List” (page 129).
To set Windows boot options, refer to “Adding Microsoft Windows to the Boot Options List” (page 134).
To set Linux boot options, refer to “Adding Linux to the Boot Options List” (page 139).
Hyper-Threading nPartitions that have dual-core Intel® Itanium® processors can support  
Hyper-Threading. Hyper-Threading provides the ability for processors to create a second  
virtual core that allows additional efficiencies of processing. For example, a dual-core  
processor with Hyper-Threading active can simultaneously run four threads.  
The EFI Shell cpuconfig command can enable and disable Hyper-Threading for an
nPartition whose processors support it. Recent releases of the nPartition Commands and  
Partition Manager also support Hyper-Threading.  
Details of the cpuconfig command are given below and are available by entering help cpuconfig at the EFI Shell.
cpuconfig threads — Reports Hyper-Threading status for the nPartition.
cpuconfig threads on — Enables Hyper-Threading for the nPartition. After enabling Hyper-Threading, the nPartition must be reset for Hyper-Threading to be active.
cpuconfig threads off — Disables Hyper-Threading for the nPartition. After disabling Hyper-Threading, the nPartition must be reset for Hyper-Threading to be inactive.
After enabling or disabling Hyper-Threading, the nPartition must be reset for the Hyper-Threading change to take effect. Use the EFI Shell reset command.
Enabled means that Hyper-Threading will be active on the next reboot of the nPartition.  
Active means that each processor core in the nPartition has a second virtual core that enables  
simultaneously running multiple threads.  
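For example, the following EFI Shell sequence reports the current Hyper-Threading status, enables Hyper-Threading, and then resets the nPartition so that the change becomes active (command output is omitted here):
Shell> cpuconfig threads
Shell> cpuconfig threads on
Shell> reset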
Autoboot Setting You can configure the autoboot setting for each nPartition either by using the autoboot command at the EFI Shell, or by using the Set Auto Boot TimeOut menu item at the EFI Boot Option Maintenance menu.
To set autoboot from HP-UX, use the setboot command.
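For example, running setboot with no arguments from HP-UX displays the current boot path and autoboot settings, and setboot -b on enables autoboot; these commands are shown only as an illustration:
# setboot
# setboot -b on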
ACPI Configuration Value—HP Integrity Server OS Boot On cell-based HP Integrity servers, you must set the proper ACPI configuration for the OS that will be booted on the nPartition.
To check the ACPI configuration value, issue the acpiconfig command with no arguments at the EFI Shell.
To set the ACPI configuration value, issue the acpiconfig value command at the EFI Shell, where value is either default or windows. Then reset the nPartition by issuing the reset EFI Shell command for the setting to take effect.
The ACPI configuration settings for the supported OSes are in the following list.  
HP-UX ACPI Configuration: default On cell-based HP Integrity servers, to boot or install  
the HP-UX OS, you must set the ACPI configuration value for the nPartition to default.  
HP OpenVMS I64 ACPI Configuration: default On cell-based HP Integrity servers, to  
boot or install the HP OpenVMS I64 OS, you must set the ACPI configuration value for  
the nPartition to default.  
Windows ACPI Configuration: windows On cell-based HP Integrity servers, to boot or  
install the Windows OS, you must set the ACPI configuration value for the nPartition  
to windows.  
Red Hat Enterprise Linux ACPI Configuration: default On cell-based HP Integrity servers,  
to boot or install the Red Hat Enterprise Linux OS, you must set the ACPI configuration  
value for the nPartition to default.  
SuSE Linux Enterprise Server ACPI Configuration: default On cell-based HP Integrity  
servers, to boot or install the SuSE Linux Enterprise Server OS, you must set the ACPI  
configuration value for the nPartition to default.  
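For example, the following EFI Shell sequence lists the current ACPI configuration value, sets it to default (the value required for HP-UX, HP OpenVMS I64, and Linux), and resets the nPartition so that the setting takes effect:
Shell> acpiconfig
Shell> acpiconfig default
Shell> reset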
Boot Modes on HP Integrity nPartitions: nPars and vPars Modes On cell-based HP Integrity  
servers, each nPartition can be configured in either of two boot modes:  
nPars Boot Mode
In nPars boot mode, an nPartition is configured to boot any single operating system in the standard environment. When an nPartition is in nPars boot mode, it cannot boot the vPars monitor and therefore does not support HP-UX virtual partitions.
vPars Boot Mode
In vPars boot mode, an nPartition is configured to boot into the vPars environment. When an nPartition is in vPars boot mode, it can boot only the vPars monitor; therefore it supports only HP-UX virtual partitions and does not support booting HP OpenVMS I64, Microsoft Windows, or other operating systems. On an nPartition in vPars boot mode, HP-UX can boot only within a virtual partition (from the vPars monitor) and cannot boot as a standalone, single operating system in the nPartition.
CAUTION: An nPartition on an HP Integrity server cannot boot HP-UX virtual partitions when in nPars boot mode. Likewise, an nPartition on an HP Integrity server cannot boot an operating system outside of a virtual partition when in vPars boot mode.
To display or set the boot mode for an nPartition on a cell-based HP Integrity server, use  
any of the following tools as appropriate. Refer to Installing and Managing HP-UX Virtual  
Partitions (vPars), Sixth Edition, for details, examples, and restrictions.  
parconfig EFI shell command
The parconfig command is a built-in EFI shell command. Refer to the help parconfig command for details.
\EFI\HPUX\vparconfig EFI shell command
The vparconfig command is delivered in the \EFI\HPUX directory on the EFI system partition of the disk where the HP-UX virtual partitions software has been installed on a cell-based HP Integrity server. For usage details, enter the vparconfig command with no options.
vparenv HP-UX command
On cell-based HP Integrity servers only, the vparenv HP-UX command is installed on HP-UX 11i v2 (B.11.23) systems that have the HP-UX virtual partitions software. Refer to vparenv(1m) for details.
NOTE: On HP Integrity servers, nPartitions that do not have the parconfig EFI shell command do not support virtual partitions and are effectively in nPars boot mode.
HP recommends that you do not use the parconfig EFI shell command and instead use the \EFI\HPUX\vparconfig EFI shell command to manage the boot mode for nPartitions on cell-based HP Integrity servers.
Refer to Installing and Managing HP-UX Virtual Partitions (vPars), Sixth Edition, for details.  
Booting and Shutting Down HP-UX  
This section presents procedures for booting and shutting down HP-UX on cell-based HP servers  
and a procedure for adding HP-UX to the boot options list on HP Integrity servers.  
To determine whether the cell local memory (CLM) configuration is appropriate for HP-UX, refer to “HP-UX Support for Cell Local Memory” (page 118).
To add an HP-UX entry to the nPartition boot options list on an HP Integrity server, refer to “Adding HP-UX to the Boot Options List” (page 118).
To boot HP-UX, refer to “Booting HP-UX” (page 119).
To shut down HP-UX, refer to “Shutting Down HP-UX” (page 127).
HP-UX Support for Cell Local Memory  
On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter,  
which determines how firmware interleaves memory residing on the cell.  
IMPORTANT: HP-UX 11i Version 2 (B.11.23) supports using CLM. The optimal CLM settings  
for HP-UX B.11.23 depend on the applications and workload the OS is running.  
To check CLM configuration details from an OS, use Partition Manager or the parstatus  
command. For example, the parstatus -V -c# command and parstatus -V -p# command  
report the CLM amount requested and CLM amount allocated for the specified cell (-c#, where  
# is the cell number) or the specified nPartition (-p#, where # is the nPartition number). For  
details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://  
To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use the info mem command. If the amount of noninterleaved memory reported is less than 512 MB, then no CLM is configured for any cells in the nPartition (and the indicated amount of noninterleaved memory is used by system firmware). If the info mem command reports more than 512 MB of noninterleaved memory, then use Partition Manager or the parstatus command to confirm the CLM configuration details.
To set the CLM configuration, use Partition Manager or the parmodify command. For details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://docs.hp.com/en/
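For example, the following commands check the CLM configuration; the cell number 0 used with parstatus is only an example. From HP-UX:
# parstatus -V -c0
From the EFI Shell on a cell-based HP Integrity server:
Shell> info mem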
Adding HP-UX to the Boot Options List  
This section describes how to add an HP-UX entry to the system boot options list.  
You can add the \EFI\HPUX\HPUX.EFI loader to the boot options list from the EFI Shell or EFI Boot Configuration menu (or in some versions of EFI, the Boot Option Maintenance Menu).
See “Boot Options List” (page 114) for additional information about saving, restoring, and creating  
boot options.  
NOTE: On HP Integrity servers, the OS installer automatically adds an entry to the boot options  
list.  
Procedure 4-1 Adding an HP-UX Boot Option  
This procedure adds an HP-UX item to the boot options list from the EFI Shell.  
To add an HP-UX boot option when logged in to HP-UX, use the setboot command. For details, refer to the setboot(1M) manpage.
1. Access the EFI Shell environment.  
Log in to the management processor, and enter CO to access the system console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell environment.
2. Access the EFI System Partition for the device from which you want to boot HP-UX (fsX:, where X is the file system number).
For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
The full path for the HP-UX loader is \EFI\HPUX\HPUX.EFI, and it should be on the device  
you are accessing.  
3. At the EFI Shell environment, use the bcfg command to manage the boot options list.
The bcfg command includes the following options for managing the boot options list:
bcfg boot dump — Display all items in the boot options list for the system.
bcfg boot rm # — Remove the item number specified by # from the boot options list.
bcfg boot mv #a #b — Move the item number specified by #a to the position specified by #b in the boot options list.
bcfg boot add # file.efi "Description" — Add a new boot option to the position in the boot options list specified by #. The new boot option references file.efi and is listed with the title specified by Description.
For example, bcfg boot add 1 \EFI\HPUX\HPUX.EFI "HP-UX 11i" adds an HP-UX 11i item as the first entry in the boot options list.
Refer to the help bcfg command for details.
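For example, the following EFI Shell session lists the current boot options and then adds an HP-UX entry at the top of the list; the file system, position, and description are examples only, and command output is omitted:
fs2:\> bcfg boot dump
fs2:\> bcfg boot add 1 \EFI\HPUX\HPUX.EFI "HP-UX 11i v2"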
4. Exit the console and management processor interfaces if you are finished using them.  
To exit the EFI environment, press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Booting HP-UX  
This section describes the following methods of booting HP-UX:  
“Standard HP-UX Booting” (page 120) — The standard ways to boot HP-UX. Typically, this  
results in booting HP-UX in multiuser mode.  
“Single-User Mode HP-UX Booting” (page 123) — How to boot HP-UX in single-user mode.
“LVM-Maintenance Mode HP-UX Booting” (page 126) — How to boot HP-UX in LVM-maintenance mode.
Refer to “Shutting Down HP-UX” (page 127) for details on shutting down the HP-UX OS.  
CAUTION:  
ACPI Configuration for HP-UX Must Be default On cell-based HP Integrity servers, to boot the  
HP-UX OS, an nPartition ACPI configuration value must be set to default.  
At the EFI Shell interface, enter the acpiconfig command with no arguments to list the current ACPI configuration. If the acpiconfig value is not set to default, then HP-UX cannot boot. In this situation you must reconfigure acpiconfig; otherwise, booting will be interrupted with a panic when the HP-UX kernel is launched.
To set the ACPI configuration for HP-UX:
1. At the EFI Shell interface, enter the acpiconfig default command.
2. Enter the reset command for the nPartition to reboot with the proper (default) configuration for HP-UX.
Standard HP-UX Booting  
This section describes how to boot HP-UX on cell-based HP 9000 servers and cell-based HP  
Integrity servers.  
On HP 9000 servers, to boot HP-UX, refer to “HP-UX Booting (BCH Menu)” (page 120).
On HP Integrity servers, to boot HP-UX, use either “HP-UX Booting (EFI Boot Manager)” (page 121) or “HP-UX Booting (EFI Shell)” (page 122).
Procedure 4-2 HP-UX Booting (BCH Menu)  
From the BCH Menu, use the BOOT command to boot the HP-UX OS. The BCH Menu is available only on HP 9000 servers.
1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX.  
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the BCH Main Menu (the Main Menu: Enter command or menu > prompt). If you are at a BCH menu other than the Main Menu, then enter MA to return to the BCH Main Menu.
2. Choose which device to boot.  
From the BCH Main Menu, use the PATH command to list any boot path variable settings. The primary (PRI) boot path normally is set to the main boot device for the nPartition. You also can use the SEARCH command to find and list potentially bootable devices for the nPartition.
Main Menu: Enter command or menu > PATH
Primary Boot Path:      0/0/2/0/0.13
                        0/0/2/0/0.d      (hex)
HA Alternate Boot Path: 0/0/2/0/0.14
                        0/0/2/0/0.e      (hex)
Alternate Boot Path:    0/0/2/0/0.0
                        0/0/2/0/0.0      (hex)
Main Menu: Enter command or menu >
3. Boot the device by using the BOOT command from the BCH interface.
You can issue the BOOT command in any of the following ways:
BOOT  
Issuing the BOOT command with no arguments boots the device at the primary (PRI) boot path.
BOOT bootvariable
This command boots the device indicated by the specified boot path, where bootvariable is the PRI, HAA, or ALT boot path.
For example, BOOT PRI boots the primary boot path.
BOOT LAN INSTALL or BOOT LAN.ip-address INSTALL
The BOOT... INSTALL commands boot HP-UX from the default HP-UX install server or from the server specified by ip-address.
BOOT path
This command boots the device at the specified path. You can specify the path in HP-UX hardware path notation (for example, 0/0/2/0/0.13) or in path label format (for example, P0 or P1).
If you specify the path in path label format, then path refers to a device path reported by the last SEARCH command.
After you issue the BOOT command, the BCH interface prompts you to specify whether you want to stop at the ISL prompt.
To boot the /stand/vmunix HP-UX kernel from the device without stopping at the ISL prompt, enter n to automatically proceed past ISL and execute the contents of the AUTO file on the chosen device. (By default the AUTO file is configured to load /stand/vmunix.)
Main Menu: Enter command or menu > BOOT PRI  
Primary Boot Path: 0/0/1/0/0.15  
Do you wish to stop at the ISL prompt prior to booting? (y/n) >> n  
ISL booting hpux  
Boot  
: disk(0/0/1/0/0.15.0.0.0.0.0;0)/stand/vmunix  
To boot an HP-UX kernel other than /stand/vmunix, or to boot HP-UX in single-user or LVM-maintenance mode, stop at the ISL prompt and specify the appropriate arguments to the hpux loader.
4. Exit the console and management processor interfaces if you are finished using them.  
To exit the BCH environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Procedure 4-3 HP-UX Booting (EFI Boot Manager)  
From the EFI Boot Manager menu, select an item from the boot options list to boot HP-UX using  
that boot option. The EFI Boot Manager is available only on HP Integrity servers.  
Refer to “ACPI Configuration Value—HP Integrity Server OS Boot” (page 116) for configuration details.
1. Access the EFI Boot Manager menu for the nPartition on which you want to boot HP-UX.  
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
2. At the EFI Boot Manager menu, select an item from the boot options list.  
Each item in the boot options list references a specific boot device and provides a specific  
set of boot options or arguments to be used when booting the device.  
3. Press Enter to initiate booting using the chosen boot option.  
4. Exit the console and management processor interfaces if you are finished using them.  
To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Procedure 4-4 HP-UX Booting (EFI Shell)  
From the EFI Shell environment, to boot HP-UX on a device, first access the EFI System Partition for the root device (for example fs0:) and then enter HPUX to initiate the loader. The EFI Shell is available only on HP Integrity servers.
Refer to “ACPI Configuration Value—HP Integrity Server OS Boot” (page 116) for configuration details.
1. Access the EFI Shell environment for the nPartition on which you want to boot HP-UX.  
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell environment.
2. At the EFI Shell environment, issue the acpiconfig command to list the current ACPI configuration for the local nPartition.
On cell-based HP Integrity servers, to boot the HP-UX OS, an nPartition ACPI configuration value must be set to default. If the acpiconfig value is not set to default, then HP-UX cannot boot; in this situation you must reconfigure acpiconfig or booting will be interrupted with a panic when launching the HP-UX kernel.
To set the ACPI configuration for HP-UX:
a. At the EFI Shell interface, enter the acpiconfig default command.
b. Enter the reset command for the nPartition to reboot with the proper (default) configuration for HP-UX.
3. At the EFI Shell environment, issue the map command to list all currently mapped bootable devices.
The bootable file systems of interest typically are listed as fs0:, fs1:, and so on.
4. Access the EFI System Partition for the device from which you want to boot HP-UX (fsX:  
where X is the file system number).  
For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
The file system number can change each time it is mapped (for example, when the nPartition boots, or when the map -r command is issued).
5. When accessing the EFI System Partition for the desired boot device, issue the HPUX command to initiate the HPUX.EFI loader on the device you are accessing.
The full path for the loader is \EFI\HPUX\HPUX.EFI. When initiated, HPUX.EFI references the \EFI\HPUX\AUTO file and boots HP-UX using the default boot behavior specified in the AUTO file.
You are given 10 seconds to interrupt the automatic booting of the default boot behavior. Pressing any key during this 10-second period stops the HP-UX boot process and enables you to interact with the HPUX.EFI loader. To exit the loader (the HPUX> prompt), enter exit (this returns you to the EFI Shell).
To boot the HP-UX OS, do not type anything during the 10-second period given for stopping at the HPUX.EFI loader.
Shell> map  
Device mapping table  
fs0 : Acpi(000222F0,269)/Pci(0|0)/Scsi(Pun8,Lun0)/HD(Part1,Sig72550000)  
blk0 : Acpi(000222F0,269)/Pci(0|0)/Scsi(Pun8,Lun0)  
blk1 : Acpi(000222F0,269)/Pci(0|0)/Scsi(Pun8,Lun0)/HD(Part1,Sig72550000)  
blk2 : Acpi(000222F0,269)/Pci(0|0)/Scsi(Pun8,Lun0)/HD(Part2,Sig72550000)  
blk3 : Acpi(000222F0,2A8)/Pci(0|0)/Scsi(Pun8,Lun0)  
blk4 : Acpi(000222F0,2A8)/Pci(0|1)/Scsi(Pun2,Lun0)  
Shell> fs0:  
fs0:\> hpux  
(c) Copyright 1990-2002, Hewlett Packard Company.  
All rights reserved  
HP-UX Boot Loader for IA64 Revision 1.723  
Press Any Key to interrupt Autoboot  
\efi\hpux\AUTO ==> boot vmunix  
Seconds left till autoboot - 9  
6. Exit the console and management processor interfaces if you are finished using them.  
To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Single-User Mode HP-UX Booting  
This section describes how to boot HP-UX in single-user mode on cell-based HP 9000 servers  
and cell-based HP Integrity servers.  
On HP 9000 servers, to boot HP-UX in single-user mode, refer to “Single-User Mode HP-UX Booting (BCH Menu)” (page 123).
On HP Integrity servers, to boot HP-UX in single-user mode, refer to “Single-User Mode HP-UX Booting (EFI Shell)” (page 125).
Procedure 4-5 Single-User Mode HP-UX Booting (BCH Menu)  
From the BCH Menu, you can boot HP-UX in single-user mode by issuing the BOOT command, stopping at the ISL interface, and issuing hpux loader options. The BCH Menu is available only on HP 9000 servers.
1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in  
single-user mode.  
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the BCH Main Menu (the Main Menu: Enter command or menu > prompt). If you are at a BCH menu other than the Main Menu, then enter MA to return to the BCH Main Menu.
2. Boot the desired device by using the BOOT command at the BCH interface, and specify that the nPartition stop at the ISL prompt prior to booting (reply y to the “stop at the ISL prompt” question).
Main Menu: Enter command or menu > BOOT 0/0/2/0/0.13  
BCH Directed Boot Path: 0/0/2/0/0.13  
Do you wish to stop at the ISL prompt prior to booting? (y/n) >> y  
Initializing boot Device.  
....  
ISL Revision A.00.42 JUN 19, 1999  
ISL>  
3. From the ISL prompt, issue the appropriate Secondary System Loader (hpux) command to  
boot the HP-UX kernel in the desired mode.  
Use the hpux loader to specify the boot mode options and to specify which kernel to boot on the nPartition (for example, /stand/vmunix).
To boot HP-UX in single-user mode:  
ISL> hpux -is boot /stand/vmunix  
Example 4-1 (page 124) shows output from this command.  
To boot HP-UX at the default run level:  
ISL> hpux boot /stand/vmunix  
To exit the ISL prompt and return to the BCH interface, issue the EXIT command instead of specifying one of the hpux loader commands.
Refer to the hpux(1M) manpage for a detailed list of hpux loader options.
Example 4-1 Single-User HP-UX Boot  
ISL Revision A.00.42 JUN 19, 1999  
ISL> hpux -is /stand/vmunix  
Boot  
: disk(0/0/2/0/0.13.0.0.0.0.0;0)/stand/vmunix  
8241152 + 1736704 + 1402336 start 0x21a0e8  
....  
INIT: Overriding default level with level 's'
INIT: SINGLE USER MODE  
INIT: Running /sbin/sh  
#
4. Exit the console and management processor interfaces if you are finished using them.  
To exit the BCH environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Procedure 4-6 Single-User Mode HP-UX Booting (EFI Shell)  
From the EFI Shell environment, boot in single-user mode by stopping the boot process at the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>) and entering the boot -is vmunix command. The EFI Shell is available only on HP Integrity servers.
Refer to “ACPI Configuration Value—HP Integrity Server OS Boot” (page 116) for configuration details.
1. Access the EFI Shell environment for the nPartition on which you want to boot HP-UX in  
single-user mode.  
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell  
environment.  
2. Access the EFI System Partition for the device from which you want to boot HP-UX (fsX:  
where X is the file system number).  
3. When accessing the EFI System Partition for the desired boot device, issue the HPUX command to initiate the \EFI\HPUX\HPUX.EFI loader on the device you are accessing.
4. Boot to the HP-UX Boot Loader prompt (HPUX>) by pressing any key within the 10 seconds given for interrupting the HP-UX boot process. You will use the HPUX.EFI loader to boot HP-UX in single-user mode in the next step.
After you press any key, the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>) is provided. For help using the HPUX.EFI loader, enter the help command. To return to the EFI Shell, enter exit.
fs0:\> hpux  
(c) Copyright 1990-2002, Hewlett Packard Company.  
All rights reserved  
HP-UX Boot Loader for IA64 Revision 1.723  
Press Any Key to interrupt Autoboot  
\efi\hpux\AUTO ==> boot vmunix  
Seconds left till autoboot - 9
[User Types a Key to Stop the HP-UX Boot Process and Access the HPUX.EFI Loader]
Type help for help
HPUX>  
5. At the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>), enter the boot -is vmunix command to boot HP-UX (the /stand/vmunix kernel) in single-user (-is) mode.
HPUX> boot -is vmunix  
> System Memory = 4063 MB  
loading section 0  
................................................... (complete)  
loading section 1  
........ (complete)  
loading symbol table  
loading System Directory(boot.sys) to MFS  
....  
loading MFSFILES Directory(bootfs) to MFS  
......  
Launching /stand/vmunix  
SIZE: Text:25953K + Data:3715K + BSS:3637K = Total:33306K  
Console is on a Serial Device  
Booting kernel...  
6. Exit the console and management processor interfaces if you are finished using them.  
To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
LVM-Maintenance Mode HP-UX Booting  
This section describes how to boot HP-UX in LVM-maintenance mode on cell-based HP 9000  
servers and cell-based HP Integrity servers.  
On HP 9000 servers, to boot HP-UX in LVM-maintenance mode, refer to “LVM-Maintenance Mode HP-UX Booting (BCH Menu)” (page 126).
On HP Integrity servers, to boot HP-UX in LVM-maintenance mode, refer to “LVM-Maintenance Mode HP-UX Booting (EFI Shell)” (page 126).
Procedure 4-7 LVM-Maintenance Mode HP-UX Booting (BCH Menu)  
From the BCH Menu, you can boot HP-UX in LVM-maintenance mode by issuing the BOOT command, stopping at the ISL interface, and issuing hpux loader options. The BCH Menu is available only on HP 9000 servers.
1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in  
LVM-maintenance mode.  
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the BCH Main Menu (the Main Menu: Enter command or menu > prompt). If you are at a BCH menu other than the Main Menu, then enter MA to return to the BCH Main Menu.
2. Boot the desired device by using the BOOT command at the BCH interface, and specify that the nPartition stop at the ISL prompt prior to booting (reply y to the “stop at the ISL prompt” question).
3. From the ISL prompt, issue the appropriate Secondary System Loader (hpux) command to  
boot the HP-UX kernel in the desired mode.  
To boot HP-UX in LVM-maintenance mode:  
ISL> hpux -lm boot /stand/vmunix  
4. Exit the console and management processor interfaces if you are finished using them.  
To exit the BCH environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Procedure 4-8 LVM-Maintenance Mode HP-UX Booting (EFI Shell)  
From the EFI Shell environment, boot in LVM-maintenance mode by stopping the boot process at the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>) and entering the boot -lm vmunix command. The EFI Shell is available only on HP Integrity servers.
Refer to “ACPI Configuration Value—HP Integrity Server OS Boot” (page 116) for configuration details.
1. Access the EFI Shell environment for the nPartition on which you want to boot HP-UX in  
LVM-maintenance mode.  
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell  
environment.  
2. Access the EFI System Partition for the device from which you want to boot HP-UX (fsX:  
where X is the file system number).  
3. When accessing the EFI System Partition for the desired boot device, issue the HPUX command to initiate the \EFI\HPUX\HPUX.EFI loader on the device you are accessing.
4. Type any key within the 10 seconds given for interrupting the HP-UX boot process. This stops the boot process at the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>).
5. At the HPUX.EFI interface, enter the boot -lm vmunix command to boot HP-UX (the /stand/vmunix kernel) in LVM-maintenance (-lm) mode.
6. Exit the console and management processor interfaces if you are finished using them.  
To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Shutting Down HP-UX  
When HP-UX is running on an nPartition, you can shut down HP-UX using the shutdown  
command.  
On nPartitions you have the following options when shutting down HP-UX:  
To shut down HP-UX and reboot an nPartition: shutdown -r  
On cell-based HP Integrity servers, the shutdown -r command is equivalent to the shutdown -R command.
To shut down HP-UX and halt an nPartition: shutdown -h
On cell-based HP Integrity servers, the shutdown -h command is equivalent to the shutdown -R -H command.
To perform a reboot for reconfiguration of an nPartition: shutdown -R  
To hold an nPartition at a shutdown for reconfiguration state: shutdown -R -H  
For details, refer to the shutdown(1M) manpage.  
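For example, any of the following commands can be entered at the HP-UX root prompt, depending on the shutdown behavior you want; see shutdown(1M) for additional options such as the grace period:
# shutdown -r
# shutdown -h
# shutdown -R -H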
Procedure 4-9 Shutting Down HP-UX  
From the HP-UX command line, issue the shutdown command to shut down the HP-UX OS.
1. Log in to HP-UX running on the nPartition that you want to shut down.  
Log in to the management processor for the server and use the Console menu to access the  
system console. Accessing the console through the MP enables you to maintain console  
access to the system after HP-UX has shut down.  
2. Issue the shutdown command with the appropriate command-line options.
The command-line options you specify dictate the way in which HP-UX is shut down,  
whether the nPartition is rebooted, and whether any nPartition configuration changes take  
place (for example, adding or removing cells).  
Use the following list to choose an HP-UX shutdown option for your nPartition:  
Shut down HP-UX and halt the nPartition.  
On cell-based HP Integrity servers, the shutdown -h command puts an nPartition into the shutdown for reconfiguration state; for details, refer to the discussion of shutdown -R -H in this list.
Shut down HP-UX and reboot the nPartition.  
Issue the shutdown -r command to shut down and reboot the nPartition.
On cell-based HP Integrity servers, the shutdown -r command is equivalent to the shutdown -R command.
Perform a reboot for reconfiguration of the nPartition.  
Issue the HP-UX shutdown -R command to perform a reboot for reconfiguration.
This shuts down HP-UX, reconfigures the nPartition if needed, and reboots the  
nPartition.  
Reboot the nPartition and put it into the shutdown for reconfiguration state.  
Use the HP-UX shutdown -R -H command to hold the nPartition in the shutdown for reconfiguration state.
NOTE: On Superdome sx2000 PA systems, shutdown -R -H does not stop at BIB if the MP has been hot swapped since the last reboot.
NOTE: On HP Integrity servers you should reset an nPartition only after all self tests  
and partition rendezvous have completed. For example, when the nPartition is inactive  
(all cells are at BIB) or is at EFI.  
This leaves the nPartition and all its cells in an inactive state (the nPartition can be  
reconfigured remotely).  
To reboot the nPartition, you must do so manually by using the BO command at the management processor Command Menu.
If HP-UX is halted on the nPartition, thus not allowing you to use the shutdown command, you can reboot or reset the nPartition by issuing commands from the management processor Command Menu.
Booting and Shutting Down HP OpenVMS I64  
This section presents procedures for booting and shutting down HP OpenVMS I64 on cell-based  
HP Integrity servers and procedures for adding HP OpenVMS to the boot options list.  
To determine whether the cell local memory (CLM) configuration is appropriate for HP OpenVMS, refer to “HP OpenVMS I64 Support for Cell Local Memory” (page 129).
To add an HP OpenVMS entry to the boot options list, refer to “Adding HP OpenVMS to the Boot Options List” (page 129).
To boot HP OpenVMS on a cell-based HP Integrity server, refer to “Booting HP OpenVMS” (page 131).
To shut down HP OpenVMS, refer to “Shutting Down HP OpenVMS” (page 132).  
HP OpenVMS I64 Support for Cell Local Memory  
On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter,  
which determines how firmware interleaves memory residing on the cell.  
IMPORTANT: HP OpenVMS I64 does not support using CLM. Before booting OpenVMS on an  
nPartition, you must ensure that the CLM parameter for each cell in the nPartition is set to zero  
(0). Although you might be able to boot OpenVMS on an nPartition with CLM configured, any  
memory configured as cell local is unusable, and such a configuration is untested and  
unsupported.  
To check CLM configuration details from an OS, use Partition Manager or the parstatus  
command. For example, the parstatus -V -c# command and parstatus -V -p# command  
report the CLM amount requested and CLM amount allocated for the specified cell (-c#, where  
# is the cell number) or the specified nPartition (-p#, where # is the nPartition number). For  
details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://  
To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use the info mem command. If the amount of noninterleaved memory reported is less than 512 MB, then no CLM is configured for any cells in the nPartition (and the indicated amount of noninterleaved memory is used by system firmware). If the info mem command reports more than 512 MB of noninterleaved memory, then use Partition Manager or the parstatus command to confirm the CLM configuration details.
To set the CLM configuration, use Partition Manager or the parmodify command. For details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://docs.hp.com/en/
Adding HP OpenVMS to the Boot Options List  
On HP Integrity servers, you can use the following procedures to manage boot options list entries  
for HP OpenVMS.  
You can add the \efi\vms\vms_loader.efi loader to the boot options list from the EFI Shell or EFI Boot Configuration menu (or in some versions of EFI, the Boot Option Maintenance Menu).
See “Boot Options List” (page 114) for additional information about saving, restoring, and creating  
boot options.  
NOTE: OpenVMS I64 installation and upgrade procedures assist you in setting up and validating  
a boot option for your system disk. HP recommends that you allow the procedure to do this.  
To configure booting on Fibre Channel devices, you must use the OpenVMS I64 Boot Manager  
utility (BOOT_OPTIONS.COM). For more information on this utility and other restrictions, refer  
to the HP OpenVMS for Integrity Servers Upgrade and Installation Manual.  
Procedure 4-10 Adding an HP OpenVMS Boot Option  
This procedure adds an HP OpenVMS item to the boot options list from the EFI Shell.  
To add an HP OpenVMS boot option when logged in to OpenVMS, use the @SYS$MANAGER:BOOT_OPTIONS.COM command.
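For example, the utility is invoked from the OpenVMS DCL prompt as follows and then presents a menu of boot option operations:
$ @SYS$MANAGER:BOOT_OPTIONS.COM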
1. Access the EFI Shell environment.  
Log in to the management processor, and enter CO to access the system console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell  
environment.  
2. Access the EFI System Partition for the device from which you want to boot HP OpenVMS  
(fsX:, where X is the file system number).  
For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
The full path for the HP OpenVMS loader is \efi\vms\vms_loader.efi, and it should  
be on the device you are accessing.  
3. At the EFI Shell environment, use the bcfg command to manage the boot options list.
You can also accomplish this step by using the \efi\vms\vms_bcfg.efi and \efi\vms\vms_show.efi utilities, which are available on the EFI System Partition for HP OpenVMS. Both vms_bcfg and vms_show are unique utilities for OpenVMS I64. The vms_bcfg utility differs from the bcfg EFI command in that vms_bcfg enables you to specify boot devices using device names consistent with OpenVMS naming conventions.
The bcfg command includes the following options for managing the boot options list:
bcfg boot dump — Display all items in the boot options list for the system.
bcfg boot rm # — Remove the item number specified by # from the boot options list.
bcfg boot mv #a #b — Move the item number specified by #a to the position specified by #b in the boot options list.
bcfg boot add # file.efi "Description" — Add a new boot option to the position in the boot options list specified by #. The new boot option references file.efi and is listed with the title specified by Description.
For example, bcfg boot add 1 \efi\vms\vms_loader.efi "HP OpenVMS" adds an HP OpenVMS item as the first entry in the boot options list.
Refer to the help bcfg command for details.
4. Exit the console and management processor interfaces if you are finished using them.  
To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Booting HP OpenVMS  
To boot HP OpenVMS I64 on a cell-based HP Integrity server use either of the following  
procedures.  
CAUTION:  
ACPI Configuration for HP OpenVMS I64 Must Be default On cell-based HP Integrity servers,  
to boot the HP OpenVMS OS, an nPartition ACPI configuration value must be set to default.  
At the EFI Shell interface, enter the acpiconfig command with no arguments to list the current ACPI configuration. If the acpiconfig value is not set to default, then OpenVMS cannot boot. In this situation, you must reconfigure acpiconfig; otherwise, booting will fail and report the INCONSTATE code when OpenVMS is launched.
To set the ACPI configuration for HP OpenVMS I64:
1. At the EFI Shell interface, enter the acpiconfig default command.
2. Enter the reset command for the nPartition to reboot with the proper (default) configuration for OpenVMS.
Procedure 4-11 Booting HP OpenVMS (EFI Boot Manager)  
From the EFI Boot Manager menu, select an item from the boot options list to boot HP OpenVMS  
using the selected boot option.  
1. Access the EFI Boot Manager menu for the system on which you want to boot HP OpenVMS.  
Log in to the management processor, and enter CO to select the system console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
2. At the EFI Boot Manager menu, select an item from the boot options list.  
Each item in the boot options list references a specific boot device and provides a specific  
set of boot options or arguments to use when booting the device.  
3. Press Enter to initiate booting using the selected boot option.  
4. Exit the console and management processor interfaces when you have finished using them.  
To exit the EFI environment, press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Procedure 4-12 Booting HP OpenVMS (EFI Shell)  
From the EFI Shell environment, to boot HP OpenVMS on a device, first access the EFI System Partition for the root device (for example fs0:), and enter \efi\vms\vms_loader to initiate the OpenVMS loader.
1. Access the EFI Shell environment for the system on which you want to boot HP OpenVMS.  
Log in to the management processor, and enter CO to select the system console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell  
environment.  
2. At the EFI Shell environment, issue the map command to list all currently mapped bootable devices.
The bootable file systems of interest typically are listed as fs0:, fs1:, and so on.
3. Access the EFI System Partition for the device from which you want to boot HP OpenVMS (fsX:, where X is the file system number).
For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
Also, the file system number might change each time it is mapped (for example, when the system boots, or when the map -r command is issued).
4. When accessing the EFI System Partition for the desired boot device, issue the \efi\vms\vms_loader command to initiate the vms_loader.efi loader on the device you are accessing.
fs5:> \efi\vms\vms_loader.efi  
HP OpenVMS Industry Standard 64 Operating System, Version V8.3  
Copyright 1976-2005 Hewlett-Packard Development Company, L.P.  
%PKA0, Copyright (c) 1998 LSI Logic PKW V3.2.20 ROM 4.19  
%PKA0, SCSI Chip is SYM53C1010/66, Operating mode is LVD Ultra3 SCSI  
%SMP-I-CPUTRN, CPU #01 has joined the active set.  
%SMP-I-CPUTRN, CPU #02 has joined the active set.  
...  
5. Exit the console and management processor interfaces when you have finished using them.  
To exit the EFI environment, press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Shutting Down HP OpenVMS  
This section describes how to shut down the HP OpenVMS OS on cell-based HP Integrity servers.  
Procedure 4-13 Shutting Down HP OpenVMS  
From the HP OpenVMS command line, issue the @SYS$SYSTEM:SHUTDOWN command to shut down the OpenVMS OS.
1. Log in to HP OpenVMS running on the system that you want to shut down.  
Log in to the management processor (MP) for the server and use the Console menu to access  
the system console. Accessing the console through the MP enables you to maintain console  
access to the system after HP OpenVMS has shut down.  
2. At the OpenVMS command line (DCL), issue the @SYS$SYSTEM:SHUTDOWN command and specify the shutdown options in response to the prompts given.
>@SYS$SYSTEM:SHUTDOWN  
SHUTDOWN -- Perform an Orderly System Shutdown  
on node RSNVMS  
How many minutes until final shutdown [0]:  
Reason for shutdown [Standalone]:  
Do you want to spin down the disk volumes [NO]?  
Do you want to invoke the site-specific shutdown procedure [YES]?  
Should an automatic system reboot be performed [NO]? yes  
When will the system be rebooted [shortly via automatic reboot]:  
Shutdown options (enter as a comma-separated list):
REBOOT_CHECK       Check existence of basic system files
SAVE_FEEDBACK      Save AUTOGEN feedback information from this boot
DISABLE_AUTOSTART  Disable autostart queues
POWER_OFF          Request console to power-off the system
Shutdown options [NONE]:
%SHUTDOWN-I-OPERATOR, this terminal is now an operators console  
...  
NOTE: HP OpenVMS I64 currently does not support the POWER_OFF shutdown option.
The SYS$SYSTEM:SHUTDOWN.COM command prompts establish the shutdown behavior, including the shutdown time and whether the system is rebooted after it is shut down.
To perform a reboot for reconfig from OpenVMS I64 running on an nPartition, issue @SYS$SYSTEM:SHUTDOWN.COM from OpenVMS, and then enter Yes at the “Should an automatic system reboot be performed” prompt.
To perform a shutdown for reconfig of an nPartition running OpenVMS I64:
1. Issue @SYS$SYSTEM:SHUTDOWN.COM from OpenVMS and enter No at the “Should an automatic system reboot be performed” prompt.
2. Access the management processor and, from the management processor Command Menu, issue the RR command and specify the nPartition. The nPartition you specify will be put in the shutdown for reconfig state.
Booting and Shutting Down Microsoft Windows  
This section presents procedures for booting and shutting down the Microsoft Windows OS on  
cell-based HP Integrity servers and a procedure for adding Windows to the boot options list.  
To determine whether the cell local memory (CLM) configuration is appropriate for Windows, refer to “Microsoft Windows Support for Cell Local Memory” (page 133).
To add a Windows entry to the boot options list, refer to “Adding Microsoft Windows to the Boot Options List” (page 134).
To boot Windows, refer to “Booting Microsoft Windows” (page 135).  
To shut down Windows, refer to “Shutting Down Microsoft Windows” (page 137).  
Microsoft Windows Support for Cell Local Memory  
On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter,  
which determines how firmware interleaves memory residing on the cell.  
IMPORTANT: Microsoft Windows supports using CLM on cell-based HP Integrity servers. For  
best performance in an nPartition running Windows, HP recommends that you configure the  
CLM parameter to 100 percent for each cell in the nPartition.  
To check CLM configuration details from an OS, use Partition Manager or the parstatus  
command. For example, the parstatus -V -c# command and parstatus -V -p# command  
report the CLM amount requested and CLM amount allocated for the specified cell (-c#, where  
# is the cell number) or the specified nPartition (-p#, where # is the nPartition number). For  
details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://  
To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use the info mem command. If the amount of noninterleaved memory reported is less than 512 MB, then no CLM is configured for any cells in the nPartition (and the indicated amount of noninterleaved memory is used by system firmware). If the info mem command reports more than 512 MB of noninterleaved memory, then use Partition Manager or the parstatus command to confirm the CLM configuration details.
To set the CLM configuration, use Partition Manager or the parmodify command. For details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://docs.hp.com/en/
Adding Microsoft Windows to the Boot Options List  
To add a Microsoft Windows entry to the system boot options list, you must do so from EFI. Use the \MSUtil\nvrboot.efi utility to import boot options from the EFI\Microsoft\WINNT50\Boot00... file on the device from which Windows is loaded.
See “Boot Options List” (page 114) for additional information about saving, restoring, and creating  
boot options.  
NOTE: On HP Integrity servers, the OS installer automatically adds an entry to the boot options  
list.  
Procedure 4-14 Adding a Microsoft Windows Boot Option  
This procedure adds the Microsoft Windows item to the boot options list.  
1. Access the EFI Shell environment.  
Log in to the management processor, and enter CO to access the system console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell  
environment.  
2. Access the EFI System Partition for the device from which you want to boot Microsoft Windows (fsX:, where X is the file system number).
For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
The full path for the Microsoft Windows loader is \efi\microsoft\winnt50\ia64ldr.efi, and it should be on the device you are accessing. (However, you must initiate this loader only from the EFI Boot Menu and not from the EFI Shell.)
3. List the contents of the \EFI\Microsoft\WINNT50 directory to identify the name of the Windows boot option file (Boot00nn) that you want to import into the system boot options list.
fs0:\> ls EFI\Microsoft\WINNT50
Directory of: fs0:\EFI\Microsoft\WINNT50
  09/18/03  11:58a <DIR>         1,024  .
  09/18/03  11:58a <DIR>         1,024  ..
  12/18/03  08:16a                 354  Boot0001
          1 File(s)         354 bytes
          2 Dir(s)
fs0:\>
4. At the EFI Shell environment, issue the \MSUtil\nvrboot.efi command to launch the Microsoft Windows boot options utility.
fs0:\> msutil\nvrboot  
NVRBOOT: OS Boot Options Maintenance Tool [Version 5.2.3683]  
1. SUSE SLES 9  
2. HP-UX Primary Boot: 0/0/1/0/0.2.0  
* 3. Windows Server 2003, Datacenter  
4. EFI Shell [Built-in]  
* = Windows OS boot option  
(D)isplay (M)odify (C)opy E(x)port (I)mport (E)rase (P)ush (H)elp (Q)uit  
Select>  
5. Use the Import command to import the Windows boot options file.
Select> i  
Enter IMPORT file path: \EFI\Microsoft\WINNT50\Boot0001  
Imported Boot Options from file: \EFI\Microsoft\WINNT50\Boot0001  
Press enter to continue  
6. Press Q to quit the NVRBOOT utility, and exit the console and management processor  
interfaces if you are finished using them.  
To exit the EFI environment press ^B (Control+B); this exits the system console and returns  
to the management processor Main Menu. To exit the management processor, enter X at the
Main Menu.  
Booting Microsoft Windows  
You can boot the Windows Server 2003 OS on an HP Integrity server by using the EFI Boot  
Manager to choose the appropriate Windows item from the boot options list.  
Booting and Shutting Down Microsoft Windows 135  
 
Refer to “Shutting Down Microsoft Windows” (page 137) for details on shutting down the  
Windows OS.  
CAUTION:  
ACPI Configuration for Windows Must Be windows. On cell-based HP Integrity servers, to boot
the Windows OS, the nPartition ACPI configuration value must be set to windows.
At the EFI Shell, enter the acpiconfig command with no arguments to list the current ACPI
configuration. If the acpiconfig value is not set to windows, then Windows cannot boot. In
this situation, you must reconfigure acpiconfig; otherwise, booting will be interrupted with
a panic when Windows is launched.
To set the ACPI configuration for Windows: At the EFI Shell, enter the acpiconfig windows
command, and then enter the reset command for the nPartition to reboot with the proper
(windows) configuration for Windows.
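A minimal illustrative sequence of the EFI Shell commands described in this CAUTION (the Shell> prompt is shown only for orientation, and command output is omitted):
Shell> acpiconfig
Shell> acpiconfig windows
Shell> reset
The first command lists the current ACPI configuration, the second sets it to windows, and the reset reboots the nPartition so that the new setting takes effect.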
NOTE:  
Microsoft Windows Booting on HP Integrity Servers The recommended method for booting  
Windows is to use the EFI Boot Manager menu to choose a Windows entry from the boot options  
list. Using the ia64ldr.efi Windows loader from the EFI Shell is not recommended.
Procedure 4-15 Windows Booting  
From the EFI Boot Manager menu, select an item from the boot options list to boot Windows  
using that boot option. The EFI Boot Manager is available only on HP Integrity servers.  
Refer to “ACPI Configuration for Windows Must Be windows” for required configuration details.
1. Access the EFI Boot Manager menu for the system on which you want to boot Windows.  
Log in to the management processor, and enter CO to access the Console list. Select the
nPartition console.  
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main  
EFI menu). If you are at another EFI menu, select the Exit option from the submenus until  
you return to the screen with the EFI Boot Manager heading.
2. At the EFI Boot Manager menu, select an item from the boot options list.  
Each item in the boot options list references a specific boot device and provides a specific  
set of boot options or arguments to be used when booting the device.  
3. Press Enter to initiate booting using the chosen boot option.  
4. When Windows begins loading, wait for the Special Administration Console (SAC) to become  
available.  
The SAC interface provides a text-based administration tool that is available from the  
nPartition console. For details, refer to the SAC online help (type ? at the SAC> prompt).
Loading.: Windows Server 2003, Datacenter  
Starting: Windows Server 2003, Datacenter  
Starting Windows...  
********************************************************************************  
Computer is booting, SAC started and initialized.  
Use the "ch -?" command for information about using channels.  
Use the "?" command for general help.  
SAC>  
136 Booting and Shutting Down the Operating System  
 
5. Exit the console and management processor interfaces if you are finished using them.  
To exit the console environment, press ^B (Control+B); this exits the console and returns  
to the management processor Main menu. To exit the management processor, enter X at the
Main menu.  
Shutting Down Microsoft Windows  
You can shut down the Windows OS on HP Integrity servers using the Start menu or the
shutdown command.
CAUTION: Do not shut down Windows using the Special Administration Console (SAC) restart
or shutdown commands under normal circumstances.
Issuing restart or shutdown at the SAC> prompt causes the system to restart or shut down
immediately and can result in the loss of data.
Instead, use the Windows Start menu or the shutdown command to shut down without loss of
data.
To shut down Windows use either of the following methods.  
Select Shut Down from the Start menu, and select either Restart or Shut down from the  
drop-down menu.  
Selecting the Restart menu item shuts down and restarts the system. Selecting the Shut  
down menu item shuts down the system.  
You can use this method when using the Windows graphical interface.  
Issue the shutdown command from the Windows command line.
Refer to the procedure “Windows Shutdown from the Command Line” (page 137) for details.  
You can issue this command from a command prompt through the Special Administration  
Console (SAC) or from any other command line.  
The Windows shutdown command includes the following options:
/s      Shut down the system. This is the equivalent of Start > Shut Down, Shut down.
/r      Shut down and restart the system. This is the equivalent of Start > Shut Down, Restart.
/a      Abort a system shutdown.
/t xxx  Set the timeout period before shutdown to xxx seconds. The timeout period can
        range from 0–600, with a default of 30.
Refer to the output of the Windows help shutdown command for details.
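For instance, the following illustrative sketch combines the options above; the 120-second timeout is an arbitrary example value:
shutdown /s /t 120
shutdown /a
The first command schedules a shutdown after 120 seconds; the second aborts it if issued before the timeout expires.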
Procedure 4-16 Windows Shutdown from the Command Line  
From the Windows command line, issue the shutdown command to shut down the OS.
1. Log in to Windows running on the system that you want to shut down.  
For example, access the system console and use the Windows SAC interface to start a  
command prompt, from which you can issue Windows commands to shut down the system.  
2. Check whether any users are logged in.  
Use the query user or query session command.
Booting and Shutting Down Microsoft Windows 137  
   
3. Issue the shutdown command with the appropriate options to shut down Windows
Server 2003 on the system.  
You have the following options when shutting down Windows:  
To shut down Windows and reboot: shutdown /r  
Alternatively, you can select the Start > Shut Down action and select Restart from  
the drop-down menu.  
To shut down Windows and not reboot (either power down server hardware or put an  
nPartition into a shutdown for reconfiguration state): shutdown /s  
Alternatively, you can select the Start > Shut Down action and select Shut down  
from the drop-down menu.  
To abort a shutdown (stop a shutdown that has been initiated): shutdown /a  
For example:  
shutdown /r /t 60 /c "Shut down in one minute."  
This command initiates a Windows system shutdown-and-reboot after a timeout period of  
60 seconds. The /c option specifies a message that is broadcast to any other users of the
system.  
Booting and Shutting Down Linux  
This section presents procedures for booting and shutting down the Linux OS on cell-based HP  
Integrity servers and a procedure for adding Linux to the boot options list.  
To determine whether the cell local memory (CLM) configuration is appropriate for Red
Hat Enterprise Linux or SuSE Linux Enterprise Server, refer to “Linux Support for Cell Local
Memory” (page 138).
To add a Linux entry to the nPartition boot options list, refer to “Adding Linux to the Boot
Options List” (page 139).
To boot Red Hat Enterprise Linux, refer to “Booting Red Hat Enterprise Linux” (page 140).
To boot SuSE Linux Enterprise Server, refer to “Booting SuSE Linux Enterprise Server” (page 141).
To shut down Linux, refer to “Shutting Down Linux” (page 142).
Linux Support for Cell Local Memory  
On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter,  
which determines how firmware interleaves memory residing on the cell.  
IMPORTANT: Red Hat Enterprise Linux 4 Update 4 and later, Red Hat Enterprise Linux 5 and
later, and SuSE Linux Enterprise Server 10 and later support using CLM.
To check CLM configuration details from an OS, use Partition Manager or the parstatus  
command. For example, the parstatus -V -c# command and parstatus -V -p# command  
report the CLM amount requested and CLM amount allocated for the specified cell (-c#, where  
# is the cell number) or the specified nPartition (-p#, where # is the nPartition number). For  
details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://  
To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use
the info mem command. If the amount of noninterleaved memory reported is less than 512 MB,
then no CLM is configured for any cells in the nPartition (and the indicated amount of
noninterleaved memory is used by system firmware). If the info mem command reports more
138 Booting and Shutting Down the Operating System  
   
than 512 MB of noninterleaved memory, then use Partition Manager or the parstatus command
to confirm the CLM configuration details.
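A minimal sketch of these checks, assuming cell 0 and nPartition 0 as placeholder numbers and omitting command output:
From the OS command line:
parstatus -V -c0
parstatus -V -p0
From the EFI Shell:
Shell> info mem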
To set the CLM configuration, use Partition Manager or the parmodify command. For details,
refer to the HP System Partitions Guide or the Partition Manager Web site (http://docs.hp.com/en/
Adding Linux to the Boot Options List  
This section describes how to add a Linux entry to the system boot options list. The processes  
for adding both Red Hat Enterprise Linux and SuSE Linux Enterprise Servers are given here.  
You can add the \EFI\redhat\elilo.efi loader or the \efi\SuSE\elilo.efi loader to
the boot options list from the EFI Shell or EFI Boot Configuration menu (or in some versions of  
EFI, the Boot Option Maintenance Menu).  
See “Boot Options List” (page 114) for additional information about saving, restoring, and creating  
boot options.  
NOTE: On HP Integrity servers, the OS installer automatically adds an entry to the boot options  
list.  
Procedure 4-17 Adding a Linux Boot Option  
This procedure adds a Linux item to the boot options list.  
1. Access the EFI Shell environment.  
Log in to the management processor, and enter CO to access the system console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main  
EFI menu). If you are at another EFI menu, select the Exit option from the submenus until  
you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell  
environment.  
2. Access the EFI System Partition for the device from which you want to boot Linux (fsX:,  
where X is the file system number).  
For example, enter fs2: to access the EFI System Partition for the bootable file system
number 2. The EFI Shell prompt changes to reflect the file system currently accessed.  
The full path for the Red Hat Enterprise Linux loader is \EFI\redhat\elilo.efi, and  
it should be on the device you are accessing.  
The full path for the SuSE Linux Enterprise Server loader is \efi\SuSE\elilo.efi, and  
it should be on the device you are accessing.  
3. At the EFI Shell environment, use the bcfg command to manage the boot options list.
The bcfg command includes the following options for managing the boot options list:
bcfg boot dump— Display all items in the boot options list for the system.  
bcfg boot rm # — Remove the item number specified by # from the boot options  
list.  
bcfg boot mv #a #b — Move the item number specified by #a to the position specified  
by #b in the boot options list.  
bcfg boot add # file.efi "Description"— Add a new boot option to the position in  
the boot options list specified by #. The new boot option references file.efi and is listed  
with the title specified by Description.  
For example, bcfg boot add 1 \EFI\redhat\elilo.efi "Red Hat  
Enterprise Linux" adds a Red Hat Enterprise Linux item as the first entry in the
boot options list.  
Booting and Shutting Down Linux 139  
 
Likewise, bcfg boot add 1 \efi\SuSE\elilo.efi "SLES 9" adds a SuSE Linux
item as the first entry in the boot options list.  
Refer to the help bcfg command for details.
4. Exit the console and management processor interfaces if you are finished using them.  
To exit the EFI environment press ^B (Control+B); this exits the system console and returns  
to the management processor Main Menu. To exit the management processor, enter X at the
Main Menu.  
Booting Red Hat Enterprise Linux  
You can boot the Red Hat Enterprise Linux OS on HP Integrity servers using either of the methods  
described in this section.  
Refer to “Shutting Down Linux” (page 142) for details on shutting down the Red Hat Enterprise  
Linux OS.  
CAUTION:  
ACPI Configuration for Red Hat Enterprise Linux Must Be default. On cell-based HP Integrity
servers, to boot the Red Hat Enterprise Linux OS, the nPartition ACPI configuration value must
be set to default.
At the EFI Shell, enter the acpiconfig command with no arguments to list the current ACPI
configuration. If the acpiconfig value is not set to default, then Red Hat Enterprise Linux
could panic. In this situation, you must reconfigure acpiconfig to eliminate any bus address
conflicts and ensure all I/O slots have unique addresses.
To set the ACPI configuration for Red Hat Enterprise Linux:  
At the EFI Shell enter the acpiconfig default command.
Enter the reset command for the nPartition to reboot with the proper (default)
configuration for Red Hat Enterprise Linux.  
Use either of the following methods to boot Red Hat Enterprise Linux:  
Choose a Red Hat Enterprise Linux entry from the EFI Boot Manager menu.  
To load the Red Hat Enterprise Linux OS at the EFI Boot Manager menu, choose its entry  
from the list of boot options.  
Choosing a Linux entry from the boot options list boots the OS using the ELILO.EFI loader
and the elilo.conf file.
Initiate the ELILO.EFI Linux loader from the EFI Shell.
After choosing the file system for the boot device (for example, fs0:), you can initiate the
Linux loader from the EFI Shell prompt by entering the full path for the ELILO.EFI loader.
On a Red Hat Enterprise Linux boot device EFI System Partition, the full paths to the loader
and configuration files are:
\EFI\redhat\elilo.efi
\EFI\redhat\elilo.conf
By default the ELILO.EFI loader boots Linux using the kernel image and parameters specified
by the default entry in the elilo.conf file on the EFI System Partition for the boot device.
To interact with the ELILO.EFI loader, interrupt the boot process (for example, type a space)
at the ELILO boot prompt. To exit the ELILO.EFI loader, use the exit command.
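A minimal sketch of the second method, assuming the Red Hat Enterprise Linux boot device is mapped as fs0: (use the map command to confirm the mapping on your system; output is omitted):
Shell> map
Shell> fs0:
fs0:\> \EFI\redhat\elilo.efi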
Procedure 4-18 Booting Red Hat Enterprise Linux (EFI Shell)  
Use this procedure to boot Red Hat Enterprise Linux from the EFI Shell.  
140 Booting and Shutting Down the Operating System  
     
Refer to “ACPI Configuration for Red Hat Enterprise Linux Must Be default” for required configuration details.
1. Access the EFI Shell.  
From the system console, select the EFI Shell entry from the EFI Boot Manager menu to  
access the shell.  
2. Access the EFI System Partition for the Red Hat Enterprise Linux boot device.  
Use the map EFI Shell command to list the file systems (fs0, fs1, and so on) that are known
and have been mapped.  
To select a file system to use, enter its mapped name followed by a colon (:). For example,  
to operate with the boot device that is mapped as fs3, enter fs3: at the EFI Shell prompt.
3. Enter ELILO at the EFI Shell command prompt to launch the ELILO.EFI loader.
If needed, you can specify the loader's full path by entering \EFI\redhat\elilo at the
EFI Shell command prompt.
4. Allow the ELILO.EFI loader to proceed with booting the Red Hat Enterprise Linux kernel.
By default, the ELILO.EFI loader boots the kernel image and options specified by the
default item in the elilo.conf file.
To interact with the ELILO.EFI loader, interrupt the boot process (for example, type a
space) at the ELILO boot prompt. To exit the loader, use the exit command.
Booting SuSE Linux Enterprise Server  
You can boot the SuSE Linux Enterprise Server 9 OS on HP Integrity servers using either of the  
methods described in this section.  
Refer to “Shutting Down Linux” (page 142) for details on shutting down the SuSE Linux Enterprise  
Server OS.  
CAUTION:  
ACPI Configuration for SuSE Linux Enterprise Server Must Be default. On cell-based HP Integrity
servers, to boot the SuSE Linux Enterprise Server OS, the nPartition ACPI configuration value
must be set to default.
At the EFI Shell, enter the acpiconfig command with no arguments to list the current ACPI
configuration. If the acpiconfig value is not set to default, then SuSE Linux Enterprise Server
could panic.
To set the ACPI configuration for SuSE Linux Enterprise Server:  
At the EFI Shell enter the acpiconfig default command.
Enter the reset command for the nPartition to reboot with the proper (default)
configuration for SuSE Linux Enterprise Server.  
Use either of the following methods to boot SuSE Linux Enterprise Server:  
Choose a SuSE Linux Enterprise Server entry from the EFI Boot Manager menu.  
To load the SuSE Linux Enterprise Server OS at the EFI Boot Manager menu, choose its entry  
from the list of boot options.  
Choosing a Linux entry from the boot options list boots the OS using the ELILO.EFI loader
and the elilo.conf file.
Initiate the ELILO.EFI Linux loader from the EFI Shell.
Booting and Shutting Down Linux 141  
   
After choosing the file system for the boot device (for example, fs0:), you can initiate the
Linux loader from the EFI Shell prompt by entering the full path for the ELILO.EFI loader.
On a SuSE Linux Enterprise Server boot device EFI System Partition, the full paths to the  
loader and configuration files are:  
\efi\SuSE\elilo.efi  
\efi\SuSE\elilo.conf  
By default the ELILO.EFI loader boots Linux using the kernel image and parameters specified
by the default entry in the elilo.conf file on the EFI System Partition for the boot device.
To interact with the ELILO.EFI loader, interrupt the boot process (for example, type a space)
at the ELILO boot prompt. To exit the ELILO.EFI loader, use the exit command.
Procedure 4-19 Booting SuSE Linux Enterprise Server (EFI Shell)  
Use this procedure to boot SuSE Linux Enterprise Server 9 from the EFI Shell.  
Refer to “ACPI Configuration for SuSE Linux Enterprise Server Must Be default” for required configuration details.
1. Access the EFI Shell.  
From the system console, select the EFI Shell entry from the EFI Boot Manager menu to  
access the shell.  
2. Access the EFI System Partition for the SuSE Linux Enterprise Server boot device.  
Use the map EFI Shell command to list the file systems (fs0, fs1, and so on) that are known
and have been mapped.  
To select a file system to use, enter its mapped name followed by a colon (:). For example,  
to operate with the boot device that is mapped as fs3, enter fs3: at the EFI Shell prompt.
3. Enter ELILO at the EFI Shell command prompt to launch the ELILO.EFI loader.
If needed, you can specify the loader's full path by entering \efi\SuSE\elilo at the EFI
Shell command prompt.
4. Allow the ELILO.EFI loader to proceed with booting the SuSE Linux kernel.
By default, the ELILO.EFI loader boots the kernel image and options specified by the
default item in the elilo.conf file.
To interact with the ELILO.EFI loader, interrupt the boot process (for example, type a
space) at the ELILO boot prompt. To exit the loader, use the exit command.
Shutting Down Linux  
Use the shutdown command to shut down Red Hat Enterprise Linux or SuSE Linux Enterprise
Server.
The Red Hat Enterprise Linux and SuSE Linux Enterprise Server shutdown command includes
the following options:
-h      Halt after shutdown.
        On cell-based HP Integrity servers, this either powers down server hardware or puts
        the nPartition into a shutdown for reconfiguration state.
        Use the PE command at the management processor Command Menu to manually power
        on or power off server hardware, as needed.
-r      Reboot after shutdown.
-c      Cancel an already running shutdown.
142 Booting and Shutting Down the Operating System  
   
time  
When to shut down (required). You can specify the time option in any of the following  
ways:  
Absolute time in the format hh:mm, in which hh is the hour (one or two digits) and  
mm is the minute of the hour (two digits).  
Number of minutes to wait in the format +m, in which m is the number of minutes.  
now to immediately shut down; this is equivalent to using +0 to wait zero minutes.
Refer to the shutdown(8) Linux manpage for details. Also refer to the Linux manpage for the  
poweroff command.
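Two illustrative examples built from the options above (the ten-minute delay is an arbitrary example value):
shutdown -h +10
shutdown -c
The first command halts the system after a ten-minute delay; the second cancels that pending shutdown if issued before the delay expires.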
Procedure 4-20 Shutting Down Linux  
From the command line for Red Hat Enterprise Linux or SuSE Linux Enterprise Server, issue the
shutdown command to shut down the OS.
1. Log in to Linux running on the system you want to shut down.  
Log in to the management processor (MP) for the server and use the Console menu to access  
the system console. Accessing the console through the MP enables you to maintain console  
access to the system after Linux has shut down.  
2. Issue the shutdown command with the desired command-line options, and include the
required time argument to specify when the operating system shutdown is to occur.
For example, shutdown -r +20 will shut down and reboot the system starting in 20
minutes.  
Booting and Shutting Down Linux 143  
144  
A sx2000 LEDs  
Table A-1 Front Panel LEDs  
LED | Driven By | State | Meaning
48V Good | PM | On (green) | 48V is good
HKP Good | PM | On (green) | Housekeeping is good.
MP Present | PM | On (green) | MP is installed in this cabinet.
Cabinet# | PM | Numeric / Dash / Flashing | Cabinet number / Invalid cabinet ID / Locate feature activated
Attention | MP | Flashing Red | Chassis log alert
145  
   
Table A-2 Power and OL* LEDs  
LED | Location | Driven By | State | Meaning
Cell Power | Chassis beside cell and on cell | Cell LPM | Solid Green | HKP, PWR_GOOD
Cell Attention | Chassis beside cell | CLU | Solid Yellow | Cell OL*
PDHC Post | Cell | PDHC | 0x0 / 0xf / 0xe->0x1 | No HKP / PDHC is reset or dead / PDHC Post or run state
PM Post | On the UGUY board, driven by the PM | MOP | 0x0 / 0xf / 0xe->0x1 | No HKP / MOP is reset or dead / PM Post or run state
CLU Post | On the UGUY board, driven by the CLU | SARG | 0x0 / 0xf / 0xe->0x1 | No HKP / CLU is reset or dead / CLU Post or run state
PCI Card cage Attention | Chassis behind PCI card cage | CLU | Solid Yellow | PCI card cage OL* LED
Link Cable OL* | Main Backplane | CLU | Solid Yellow | Link cable OL*
MP Post | SBCH | SBCH | 0x0 / 0xf / 0xe->0x1 | No HKP / MP is reset or dead / MP Post or run state
Cabinet and IO Bay Fans | Each fan | PM | Solid Green | Fan is running and no fault
Backplane Power Boards | System Backplane | RPM | Solid Green / Blinking Yellow | Power supply is running / Power fault
Hot swap oscillators (HSO) | System Backplane | RPM | Solid Green / Solid Yellow | HSO Supply running / HSO clock fault
146 sx2000 LEDs  
 
Figure A-1 Utilities  
Table A-3 OL* LED States  
Description | Power (Green) | OL* (Yellow)
Normal operation (powered) | On | Off
Fault detected, power on | On | Flashing
Slot selected, power on, not ready for OLA/D | On | On
Power off or slot available | Off | Off
Fault detected, power off | Off | Flashing
Ready for OL* | Off | On
Figure A-2 PDH Status  
A label on the outside of the SDCPB Frame indicates PDH Status, dc/dc converter faults that
shut down the sx2000 cell, and loss of dc/dc converter redundancy. Figure A-2 illustrates the label
and Table A-4 describes each LED.  
147  
     
NOTE: The Power Good LED is a bicolor LED (green and yellow).  
Table A-4 PDH Status and Power Good LED States  
LED | Description | Definition
BIB | Boot Is Blocked | When illuminated, it tells the end user that the system is ready to boot.
SMG | Shared Memory Good | This references non-volatile memory that manageability and system firmware share. When illuminated, the system is ready to begin fetching code.
USB | Universal Serial Bus | When illuminated, PDHC is communicating with the MP.
HB | Heart Beat | When blinking, the PDHC processor is running and the cell board can be powered on.
Power Good | Power Good | Solid green - All power is operating within specifications. Blinking yellow - Voltage rail(s) have been lost and the cell board has shut down. Solid green but blinking yellow - Cell board is still operating, but one of the redundant converters has failed on one of the voltage rails.
148 sx2000 LEDs  
 
B Management Processor Commands  
This appendix summarizes the management processor (MP) commands. In the examples, MP is  
used as the command prompt.  
NOTE: The term Guardian Service Processor has been changed to Management Processor, but  
some code already written uses the old term.  
BO Command  
BO - Boot partition  
Access level—Single PD user  
Scope—partition  
This command boots the specified partition. It ensures that all the cells assigned to the target  
partition have valid complex profiles and then releases Boot-Is-Blocked (BIB).  
Example B-1 BO command  
CA Command  
CA - Configure Asynchronous & Modem Parameters  
Access level—Operator  
Scope—Complex  
There is one active RS232 port connection to the service processor's textual user interface. This
RS232 connection is called the local RS232 port. The local RS232 port connects to a local terminal  
or to the CE laptop.  
NOTE: On the HUCB board, there is a remote RS232 connector. The remote RS232 system was  
used to connect to a modem on legacy systems. Modem support is removed, so connections to  
the remote RS232 connector are ignored.  
This command enables you to configure the local and remote console ports. The parameters you  
can configure are the baud rate, flow control, and modem type.  
BO Command 149  
       
Example B-2 CA Command  
CC Command  
CC - Complex Configuration  
Access level—Administrator  
Scope—Complex  
This command performs an initial out-of-the-box complex configuration. The system can be  
configured as either a single (user specified) cell in partition 0 (the genesis complex profile) or  
the last profile can be restored. The state of the complex prior to command execution has no  
bearing on the changes to the configuration. You must ensure that all other partitions are shut  
down before using this command. You might need to run the ID command after you create a
genesis complex profile. If the genesis profile is selected, then all remaining cells are assigned to  
the free cell list.  
150 Management Processor Commands  
   
NOTE: This command does not boot any partitions. Use the BO command to boot needed
partitions.  
NOTE: If possible, use a cell in the genesis complex profile that has a bootable device attached.  
Example B-3 CC Command  
CP Command  
CP - Cells Assigned by Partition  
Access Level - Single Partition User  
Scope - Complex  
The cp command displays a table of cells assigned to partitions and arranged by cabinets.
CP Command 151  
   
NOTE: This is for display only; no configuration is possible with this command.
Example B-4 CP Command  
DATE Command  
DATE Command - Set Date and Time.  
Access level—Administrator  
Scope—Complex  
This command changes the value of the real time clock chip on the MP.  
Example B-5 DATE Command  
DC Command  
DC - Default Configuration  
Access level—Administrator
Scope—Complex  
This command resets some or all of the configuration parameters to their default values.  
NOTE: The clock setting is not affected by the DC command.
The following example shows the various parameters and their defaults.  
152 Management Processor Commands  
       
Example B-6 DC Command  
DF Command  
DF - Display FRUID  
Access level—Single Partition User  
Scope—Complex  
This command displays the FRUID data of the specified FRU. FRU information for the SBC, BPS,
and processors is constructed because these components do not have a FRU ID EEPROM. This makes the
list of FRUs different from the list presented in the WF command.
DF Command 153  
   
Example B-7 DF Command  
DI Command  
DI - Disconnect Remote or LAN Console  
Access level—Operator  
Scope—Complex  
This command initiates separate remote console or LAN console disconnect sequences. For the  
remote console, the modem control lines are deasserted, forcing the modem to hang up the  
telephone line. For the LAN console, the telnet connection is closed.  
If the console being disconnected has an access mode of single connection (see the ER command),
then it is disabled; otherwise, it remains enabled after the connection is dropped.
The number after the LAN console status is the number of LAN connections.  
There is one active RS232 port connection to the service processor's textual user interface. This
RS232 connection is called the local RS232 port. The local RS232 port connects to a local terminal  
or to the CE laptop.  
154 Management Processor Commands  
   
NOTE: On the HUCB board, there is a remote RS232 connector. The remote RS232 system was  
used to connect to a modem on legacy systems. For sx2000 servers, modem support is removed,  
so connections to the remote RS232 connector are ignored.  
Example B-8 DI Command  
DL Command  
DL - Disable LAN Access  
Access level—Administrator  
Scope—Complex  
This command disables telnet LAN access. Disabling telnet access kills all of the current telnet  
connections and causes future telnet connection requests to be sent a connection refused message.  
Example B-9 DL Command  
Example:  
In the following example, the administrator is connected by telnet to the MP. When DL runs, the
telnet connection to the MP is closed.  
MP:CM> dl  
Disable telnet access and close open telnet connections? (Y/[N]) y  
WARNING: Answering yes will close this connection.  
Are you sure? (Y/[N]) y  
-> Telnet access disabled. All non-diagnostic connections closed.  
Connection closed by foreign host..  
NOTE: The DL command is deprecated and does not appear in the help menu. Use the SA and
DI commands to control both telnet and SSH connections.
EL Command  
EL - Enable LAN Access  
Access level—Administrator  
Scope—Complex  
This command enables telnet LAN access.  
DL Command 155  
       
Example B-10 EL Command  
MP:CM> el  
Enable telnet access? (Y/[N]) y  
-> Telnet access enabled.  
MP:CM>  
See also: DI, DL. Note that this command is deprecated and does not support SSH. Use the SA  
command instead.  
HE Command  
HE - Help Menu  
Scope—N/A  
Access level—Single PD user  
This command displays a list of all MP commands available to the level of the MP access  
(administrator, operator, or single PD user). The commands that are available in manufacturing  
mode are displayed if the MP is in manufacturing mode.  
In the following example, the MP is in manufacturing mode, so the manufacturing commands  
are shown in the last screen. This example is from a prerelease version of MP firmware.  
156 Management Processor Commands  
   
Example B-11 HE Command  
ID Command  
ID - Configure Complex Identification  
ID Command 157  
   
Access level—Operator  
Scope—Complex  
This command configures the complex identification information. The complex identification  
information includes the following:  
Model number  
Model string  
Complex serial number  
Complex system name  
Original product number  
Current product number  
Enterprise ID and diagnostic license  
This command is similar to the SSCONFIG command in ODE.
The command is protected by an authentication mechanism. The MP generates a lock word, and  
you must supply an authentication key that is dependent on the lock word. A one-minute fixed
timeout protects against this command being entered inadvertently. This command has no effect
if the timeout expires or the wrong authentication key is entered.
This command is inoperable until the MP has determined the golden complex profile.  
NOTE: When the system powers on for the first time, you must run the CC command before
you can run the ID command.
Example B-12 ID Command  
IO Command  
IO - Display Connectivity Between Cells and I/O  
Access level—Single Partition User  
Scope—Complex  
This command displays a mapping of the connectivity between cells and I/O.  
158 Management Processor Commands  
   
Example B-13 IO Command
MP:CM> io

--------+--------+--------+
Cabinet |    0   |    1   |
--------+--------+--------+
Slot    |01234567|01234567|
--------+--------+--------+
Cell    |XXXX....|........|
IO Cab  |0000....|........|
IO Bay  |0101....|........|
IO Chas |1133....|........|
MP:CM>

See also: PS
IT Command  
IT - View / Configure Inactivity Timeout Parameters  
Access level—Operator  
Scope—Complex  
This command sets the two inactivity timeouts.  
The session inactivity timeout prevents a session to a partition from being inadvertently left
open, which would prevent other users from logging on to that partition through the same path. If
the system session or the partition OS is hung, the IT command also prevents a session from being
locked indefinitely.
The second timeout is an MP-Handler command timeout. This prevents a user who does not
complete a command from blocking other users of the MP-Handler.
Neither timeout can be deactivated.  
Example B-14 IT Command  
LC Command  
LC - LAN Configuration  
Access level—Administrator  
Scope—Complex  
This command displays and modifies the LAN configurations. The IP address, hostname, subnet  
mask, and gateway address can be modified with this command.  
IT Command 159  
       
Example B-15 LC Command  
LS Command  
LS - LAN Status  
Access level—Single Partition User  
Scope—Complex  
This command displays all parameters and current connection status of the LAN interface.  
Example B-16 LS Command  
MA Command  
MA - Main Menu  
160 Management Processor Commands  
       
Access level—Single Partition User  
Scope—N/A  
The command returns you from the command menu to the main menu. Only the user that enters  
the command is returned to the private main menu.  
Example B-17 MP Main Menu  
ND Command  
ND - Network Diagnostics  
Access level—Administrator  
Scope—Complex  
This command enables or disables network diagnostics. This enables or disables the Ethernet  
access to MP Ethernet ports other than the main telnet port (TCP port 23). Disabling the network  
diagnostic port prevents the user from accessing the system with diagnostic tools such as JUST,  
GDB, LDB and firmware update (FWUU).  
Example B-18 ND Command  
MP:CM> nd  
Network diagnostics are currently enabled.  
Do you want to disable network diagnostics? (Y/[N]) y  
-> Network diagnostics are disabled.  
MP:CM>  
MP:CM> nd  
Network diagnostics are currently disabled.  
Do you want to enable network diagnostics? (Y/[N]) y  
-> Network diagnostics are enabled.  
MP:CM>  
See also: DC Command  
PD Command  
PD - Set Default Partition  
Access level—Operator  
Scope—Complex  
This command sets the default partition. If a default partition already exists, then this command  
overrides the previously defined partition. Setting the default partition prevents the user from  
ND Command 161  
       
being forced to enter a partition in commands that require a partition for their operation. For  
example, this prevents a user from accidentally TOCing the wrong partition.  
A default partition is automatically set for users who are assigned the Single Partition User access
level when they log in to the MP handler. A user assigned the Single Partition User access level
cannot change the default partition.
When Administrator- or Operator-level users log in, their default partition is set to an invalid  
partition. The default partition for users of these access levels is maintained independently for  
each connection. When the user logs out of the MP handler, the default partition setting is not  
stored in nonvolatile storage.  
Example B-19 PD Command  
See also: RE and SO commands  
PE Command  
PE - Power Entity  
Access level—Operator  
Scope—Complex  
This command turns power on or off to the specified entity. If a default partition is defined, then
the targeted entity must be a member of that partition. If the entity being powered on is an entire
cabinet, this command interacts with the physical cabinet power switch: if the cabinet power
switch is in the off position, this command does not turn cabinet power on. If this command
is used to power off a cabinet and the power switch is then cycled from on to off to on, the cabinet
powers on. Powering a cell on or off also powers on or off any attached I/O backplane; when a
cell is powered on, its attached I/O backplane is powered on first. The system backplane
(HLSB) cannot be selected as an entity and can be controlled only through the cabinet entity.
Powering off a partition that is released from BIB can result in extraneous error events being  
stored in the event logs.  
162 Management Processor Commands  
   
Example B-20 PE Command for a Compute Cabinet  
[spudome] MP:CM> pe  
This command controls power enable to a hardware device.  
B - Cabinet  
C - Cell  
I - IO Chassis  
P - Partition  
Select Device: b  
Enter cabinet number: 0  
WARNING: Cabinet 0 is connected to cabinet 1. Cabinets 0 and 1 must be powered off and on such that both  
cabinets are off for an overlapping interval.  
If one cabinet is powered off then on while the other cabinet remains on, communications between the two  
cabinets will be lost.  
The power state is ON for cabinet 0.  
In what state do you want the power? (ON/OFF) off  
[spudome] MP:CM>  
[spudome] MP:CM> pe  
This command controls power enable to a hardware device.  
B - Cabinet  
C - Cell  
I - IO Chassis  
P - Partition  
Select Device: p  
# Name  
--- ----  
0) Partition 0  
1) Partition 1  
2) Partition 2  
3) Partition 3  
Select a partition number: 0  
The power state is OFF for partition 0.  
In what state do you want the power? (ON/OFF) on  
[spudome] MP:CM>  
See also: PS command  
PS Command  
PS - Power and Configuration Status  
Access level—Single Partition User  
Scope—Cabinet  
This command displays the status of the specified hardware.  
You can retrieve a summary or more detailed information on one of the following: a cabinet, a  
cell, a core IO, or the MP.
PS Command 163  
   
Example B-21 PS Command  
RE Command  
RE - Reset Entity  
164 Management Processor Commands  
   
Access level—Operator  
Scope—Complex  
This command resets the specified entity. Be careful when resetting entities because of the side  
effects. Resetting an entity has the following side effects:  
The CLU sends the backplane_reset signal on the main backplane, which results in the  
following being reset:  
All XBCs, RCs, cells plugged into backplane, PDH interface chips, CCs, all CPUs except  
PDHC, any attached RIOs, all I/O adapters installed in the I/O backplanes associated with  
the above RIOs.  
The SINC sends the mpon signal to the PDH interface chip, which results in the following  
being reset:  
The PDH interface chip, CC, all CPUs except SINC, any attached RIO, all I/O adapters  
installed in the I/O backplane associated with the above RIO  
The CLU sends the iobackplane_reset signal to the appropriate I/O backplane, which results  
in the following being reset:  
RIO and all I/O adapters installed in the I/O backplane  
MP:CM> re  
This command resets a hardware device.  
C - Cell  
I - IO Chassis  
M - Main Backplane  
Select device: m  
Enter cabinet number: 0  
Do you want to reset the Main Backplane in Cabinet 0? (Y/[N]) y  
-> The selected device(s) will be reset.  
MP:CM>  
See also: PE command  
RL Command  
RL - Re-key Complex Profile Lock  
Access level—Operator  
Scope—Complex  
This command rekeys the complex profile lock. Use the RL command to recover from the error
caused by the holder of the lock terminating before releasing the complex profile lock. It
invalidates any outstanding key to the complex profile lock. There are up to 66 complex profile
locks: one for each partition in section C and one key each for the A and B sections of the complex
profile. The default partition is used as the default when prompting the user for which lock to rekey.
RL Command 165  
 
Example B-22 Re-key lock for partition 3  
RR Command  
RR - Reset Partition for Re-configuration  
Access level—Single Partition User  
Scope—Partition  
This command resets the specified partition, but does not automatically boot it. The utility system  
resets each cell that is a member of the specified partition. If you are either Administrator or
Operator, you can choose a partition.
Example B-23 RR Command  
RS Command  
RS - Reset Partition  
Access level—Single PD user  
Scope—Partition  
166 Management Processor Commands  
       
This command resets and boots the specified partition. The utility system resets each cell that is  
a member of the specified partition. Once all cells have reset, the partition boots. If you are either  
Administrator or Operator, you can choose a partition.  
Example B-24 RS Command  
SA Command  
SA - Set Access Parameters  
Access level—Administrator  
Scope—Complex  
This command modifies the enablement of interfaces including telnet, SSH, modem, network  
diagnostics, IPMI LAN, Web console, and so on.  
Example B-25 SA Command  
[spudome] MP:CM> sa  
This command displays and allows modification of access parameters.  
T - Telnet access : Enabled  
H - Secure Shell access : Enabled  
N - Network Diagnostics : Enabled  
D - DIAG Menu : Enabled  
I - IPMI Lan access : Enabled  
Select access mode to change :  
See also: EL, DL, DI, ND, PARPERM  
SO Command  
SO - Security Options and Access Control Configuration  
Access level—Administrator  
Scope—Complex  
This command modifies the security options and access control to the MP handler. The following  
parameters can be modified:  
Login timeout  
Number of password faults allowed  
SA Command 167  
       
Flow control timeouts  
User parameters:  
User name  
Organization name  
Access level  
Mode  
User state  
Example B-26 SO Command  
SYSREV Command  
SYSREV - Display System and Manageability Firmware Revisions  
Access level—Single Partition User  
Scope—Complex  
This command displays the firmware revisions of all of the entities in the complex.
168 Management Processor Commands  
   
Example B-27 SYSREV Command  
MP:CM> sysrev  
Manageability Subsystem FW Revision Level: 7.14  
| Cabinet #0 |  
-----------------------+-----------------+  
| SYS FW | PDHC |  
Cell (slot 0) | 32.2 | 7.6 |  
Cell (slot 1) | 32.2 | 7.6 |  
Cell (slot 2) | 32.2 | 7.6 |  
Cell (slot 3) | 32.2 | 7.6 |  
Cell (slot 4) | | |  
Cell (slot 5) | | |  
Cell (slot 6) | | |  
Cell (slot 7) | | |  
| |  
MP | 7.14 |  
CLU | 7.6 |  
PM | 7.12 |  
CIO (bay 0, chassis 1) | 7.4 |  
CIO (bay 0, chassis 3) | 7.4 |  
CIO (bay 1, chassis 1) | 7.4 |  
CIO (bay 1, chassis 3) | 7.4 |  
MP:CM>  
TC Command  
TC - TOC Partition  
Access level—Single Partition User  
Scope—Partition  
This command initiates a transfer of control (TOC) of the specified partition. The SINC on each cell in
the specified partition sends the sys_init signal to the PDH interface chip.  
Example B-28 TC Command  
TE Command  
TE - Tell  
TC Command 169  
       
Access level—Single Partition User  
Scope—Complex  
This command treats all characters following TE as a message that is broadcast when <CR>
is pressed. The message size is limited to 80 characters. Any extra characters are not broadcast.  
Also, any message that is written is not entered into the console log.  
NOTE: All users connected to the MP handler receive the message, irrespective of what partition  
the user sending the message has access to.  
Example B-29 TE Command  
VM Command  
VM - Voltage Margin  
Access level—Single Partition User  
Scope—Cabinet  
This command adjusts the voltage of all marginable supplies within a range of +/- 5%. No reset  
is required for this command to become effective.  
Example B-30 VM Command  
WHO Command  
WHO - Display List of Connected Users  
Access level—Single Partition User  
Scope—Complex  
170 Management Processor Commands  
       
This command displays the login name of the connected console client user and the port on  
which they are connected. For LAN console clients, the remote IP address is displayed.  
Example B-31 WHO Command  
XD Command  
XD - Diagnostic and Reset of MP  
Access level—Operator  
Scope—Complex  
This command tests certain functions of the SBC and SBCH boards.  
XD Command 171  
   
IMPORTANT: Some of the tests are destructive. Do not run this command on a system running  
the operating system.  
Example B-32 XD Command  
172 Management Processor Commands  
 
C Powering the System On and Off  
This appendix provides procedures to power a system on and off.  
Shutting Down the System  
Use this procedure to shut down the system.  
Checking System Configuration  
To check the current system configuration in preparation for shutdown, follow these steps:  
1. Open a command prompt window and connect to the MP (Figure C-1):  
telnet <hostname>
Figure C-1 Connecting to the Host  
2. Enter the login and password at the MP prompt. The Main Menu appears (Figure C-2).
Figure C-2 Main MP Menu  
Shutting Down the System 173  
         
3. Open the Command Menu by entering cm at the MP prompt.
4. Make sure that no one else is using the system by entering who at the CM prompt. Only one
user should be seen, as indicated in Figure C-3.  
Figure C-3 Checking for Other Users  
5. Read and save the current system configuration by entering cp at the CM prompt. Cabinet
and partition information appear (Figure C-4).  
Figure C-4 Checking Current System Configuration  
6. Go back to the Main Menu by entering ma at the CM prompt.
7. From the Main Menu, enter vfp to open the Virtual Front Panel (Figure C-5).
Figure C-5 MP Virtual Front Panel  
174 Powering the System On and Off  
     
8. From the VFP, enter s to select the whole system or enter the partition number to select a
particular partition. You should see an output similar to that shown in Figure C-6.  
Figure C-6 Example of Partition State  
9. Press ctrl+B to exit the Virtual Front Panel and return to the Main Menu.  
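The preceding steps, condensed into an illustrative command-only outline; the hostname is a placeholder, the menu output is omitted, and the MP> and MP:CM> prompts are shown only for orientation:
telnet <hostname>
MP> cm
MP:CM> who
MP:CM> cp
MP:CM> ma
MP> vfp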
Shutting Down the Operating System  
You must shut down the operating system on each partition. From the Main Menu prompt, enter
co to bring up the Partition Consoles Menu (Figure C-7).
Figure C-7 Partition Consoles Menu  
For each partition, to shut down the OS, follow these steps:  
1. Enter the partition number at the prompt.  
2. Log in to the console:  
HP-UX: Log in as root  
Linux: Log in as root  
Windows: Log in as Administrator. From the Special Administration Console (SAC>  
prompt) enter cmd to start a new command prompt. Press Esc+Tab to switch to the
channel for the command prompt and log in.  
Shutting Down the System 175  
       
3. At the console prompt, shut down and halt the operating system by entering the shutdown  
command.  
HP-UX: Enter the shutdown -h command
Linux: Enter the shutdown -h <time> command, where <time> is the number of
minutes until system shutdown
Windows: Enter the shutdown /s command
4. Exit the partition console by entering ctrl+B after shutting down the system.  
5. Repeat step 1 through step 4 for each partition.  
Preparing the Partitions for Shutdown  
IMPORTANT: Before powering off the cabinets, HP recommends first that all partitions be  
brought to the boot-is-blocked (BIB) state. If any of the partitions do not stop at BIB, then wait  
for them to reach EFI or BCH and execute the RR command.
To ensure that all partitions are ready to be shut down, follow these steps:  
1. From the CM> prompt, issue an rr command (Figure C-8).
2. Enter the partition number, and when prompted for reset of the partition number, enter Y.
Figure C-8 Entering the rr Command  
3. At the CM> prompt, enter de -s (Figure C-9).
4. From the de menu prompt, enter s to display the Cell PDH Controller.
5. When prompted, enter the cabinet and cell board number on which the partition resides.  
6. Read the Cell PDH Controller status to determine if the partition is at BIB.  
176 Powering the System On and Off  
     
Figure C-9 Using the de -s Command
7. Repeat step 1 through step 6 for each partition.  
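An illustrative command-only outline of this procedure; the prompts that follow each command ask for the partition, cabinet, and cell numbers as described in the steps above, and the responses are omitted:
MP:CM> rr
MP:CM> de -s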
Powering Off the System  
To power off the system, follow these steps:  
1. From the Command Menu, enter pe (Figure C-10).
Figure C-10 Power Entity Command  
2. Enter the number of the cabinet to power off. In Figure C-10, the number is 0.  
3. When prompted for the state of the cabinet power, enter off.  
4. Enter ps at the CM> prompt to view the power status (Figure C-11).
Shutting Down the System 177  
     
Figure C-11 Power Status First Window  
5. Enter b at the select device prompt to ensure that the cabinet power is off. The output should
be similar to that in Figure C-12. The power switch is on, but the power is not enabled.  
Figure C-12 Power Status Second Window  
The cabinet is now powered off.  
Turning On Housekeeping Power  
To turn on housekeeping power to the system, follow these steps:  
1. Verify that the ac voltage at the input source is within specifications for each cabinet being  
powered on.  
178 Powering the System On and Off  
       
2. Ensure the following:  
The ac breakers are in the OFF position.  
The cabinet power switch at the front of the cabinet is in the OFF position.  
The ac breakers and cabinet switches on the I/O expansion (IOX) cabinet (if one is  
present) are in the OFF position.  
3. If the complex has an IOX cabinet, power on this cabinet first.  
IMPORTANT: The 48 V switch on the front panel must be OFF at this time.  
4. Turn on the ac breakers on the PDCAs at the back of each cabinet.
In a large complex, power on cabinets in one of the two following orders:  
9, 8, 1, 0  
8, 9, 0, 1  
On the front and back panels, the HKP and Present LEDs illuminate (Figure C-13).  
On cabinet 0, the HKP and the Present LEDs illuminate, but only the HKP LED illuminates  
on cabinet 1 (the right cabinet).  
Figure C-13 Front Panel Display with Housekeeping (HKP) Power and Present LEDs On  
Turning On Housekeeping Power 179  
     
5. Examine the bulk power supply (BPS) LEDs (Figure C-14).  
When on, the breakers on the PDCA distribute power to the BPSs. Power is present at the  
BPSs when:  
The amber LED labeled AC0 Present is on (if the breakers are on the PDCA on the left  
side at the back of the cabinet).  
The amber LED labeled AC1 Present is on (if the breakers are on the PDCA on the right  
side at the back of the cabinet).  
Figure C-14 BPS LEDs  
Powering On the System Using the PE Command  
To power on the system, follow these steps:  
1. From the Command Menu, enter the pe command.
IMPORTANT: If the complex has an IOX cabinet, power on this cabinet first.  
In a large complex, power on the cabinets in one of the two following orders:  
9, 8, 1, 0  
8, 9, 0, 1  
2. Enter B and then the cabinet number (Figure C-15).
180 Powering the System On and Off  
     
Figure C-15 Power Entity Command  
3. Enter on to power on the cabinet.
4. From the CM> prompt, enter ps to observe the power status. The status screen shown in
Figure C-16 appears.  
Figure C-16 Power Status First Window  
Powering On the System Using the PE Command 181  
   
5. At the Select Device prompt, enter B and then the cabinet number to check the power status of
the cabinet. Observe that the power switch is on and power is enabled, as shown in Figure C-17.
Figure C-17 Power Status Window  
182 Powering the System On and Off  
 
D Templates  
This appendix contains blank floor plan grids and equipment templates. Combine the necessary  
number of floor plan grid sheets to create a scaled version of your computer room floor plan.  
Templates  
Figure D-1 illustrates the locations required for the cable cutouts.  
Figure D-2 “SD16 and SD32 Space Requirements” illustrates the overall dimensions required for  
SD16 and SD32 systems.  
Figure D-3 “SD64 Space Requirements” illustrates the overall dimensions required for an SD64  
complex.  
Figure D-1 Cable Cutouts and Caster Locations  
Templates 183  
     
Figure D-2 SD16 and SD32 Space Requirements  
184 Templates  
 
Figure D-3 SD64 Space Requirements  
Equipment Footprint Templates  
Equipment footprint templates are drawn to the same scale as the floor plan grid (1/4 inch = 1  
foot). These templates show basic equipment dimensions and space requirements for servicing.  
The service areas shown on the template drawings are lightly shaded.  
Use equipment templates with the floor plan grid to define the location of the equipment to be  
installed in the computer room.  
NOTE: Photocopying typically changes the scale of copied drawings. If any templates are  
copied, then all templates and floor plan grids must also be copied.  
Computer Room Layout Plan  
To create a computer room layout, follow these steps:  
1. Remove several copies of the floor plan grid from this appendix.  
2. Cut and join them together (as necessary) to create a scale model floor plan of the computer  
room.  
3. Remove a copy of each applicable equipment footprint template.  
Templates 185  
             
4. Cut out each template selected in step 3, then place it on the floor plan grid created in step  
2.  
5. Position the pieces until the desired layout is obtained, then fasten the pieces to the grid. Mark
the locations of the computer room doors, air conditioning floor vents, utility outlets, and  
so on.  
Figure D-4 Computer Floor Template  
186 Templates  
 
Figure D-5 Computer Floor Template  
Templates 187  
 
Figure D-6 Computer Floor Template  
188 Templates  
 
Figure D-7 Computer Floor Template  
Templates 189  
 
Figure D-8 Computer Floor Template  
190 Templates  
 
Figure D-9 SD32, SD64, and IOX Cabinet Templates  
Templates 191  
 
Figure D-10 SD32, SD64, and IOX Cabinet Templates  
192 Templates  
 
Figure D-11 SD32, SD64, and IOX Cabinet Templates  
Templates 193  
 
Figure D-12 SD32, SD64, and IOX Cabinet Templates  
194 Templates  
 
Figure D-13 SD32, SD64, and IOX Cabinet Templates  
Templates 195  
 
Figure D-14 SD32, SD64, and IOX Cabinet Templates  
196 Templates  
 
Index  
ID description, 29  
A
link interleaving, 35  
ac power verification  
major component locations, 30  
memory bank attribute table, 34  
memory error protection, 35  
memory interconnect, 33  
memory interleaving, 34  
memory shown on, 32  
memory system, 32  
4-wire PDCA, 84  
5-wire PDCA, 84  
AC0 Present LED, 98, 180  
AC1 Present LED, 98, 180  
acoustic noise specifications  
sound power level, 56  
sound pressure level, 56  
air handling spaces, 20  
American Society of Heating, Refrigerating and  
Air-Conditioning Engineers, (see ASHRAE)  
ASHRAE Class 1, 49, 50, 57  
attention LED, 179  
OL*, 37  
platform dependent hardware, 36  
processor dependent code, 36  
processor dependent table, 36  
reset signals, 37  
verifying presence of, 108  
CFM rating, 56  
B
checklist  
bezel  
repackaging, 71  
attaching front bezel, 82  
attaching rear bezel, 81  
attaching side bezels, 75  
blower bezels (See also "bezel"), 75  
blower housings  
circuit board dimensions and weight, 49  
circuit breaker sizing  
3-phase, 4-wire input, 51  
3-phase, 5-wire input, 51  
nuisance tripping, 51  
claims procedures, 62  
clock and utilities board, (see CLU)  
clock cable description, 44  
CLU  
installing, 72  
unpacking, 72  
booting  
checking cabinet power status, 107  
checking installed cell slot locations, 107  
invoking the EFI shell, 105  
output from the EFI shell, 105  
system verification, 101  
to the EFI boot manager menu, 104  
viewing UGUY LED status, 107  
BPS, 19  
general description, 23  
I/O power board sensor monitor, 21  
information gathered by, 23  
status seen in window, 108  
system clock source location, 22  
UGUY location, 22  
communications interference, 59  
compact flash  
bulk power supply, (see BPS)  
general description, 25  
parameters stored in, 25  
computer room layout plan, 185  
connecting I/O cables, 91  
cooling system  
C
cabinet ID description, 29  
cabinet unpacking, 63  
cable groomer, 92  
cables  
blowers, 21  
clock description, 44  
connecting I/O, 91  
e-Link description, 42  
external e-Link description, 42  
labeling I/O, 91  
m-Link description, 42  
routing I/O, 91  
I/O fans, 21  
inlet air sensor location, 21  
crossbar chip, (see XBC)  
customer LAN, 99  
customer signoff, 112  
D
cell board  
damage  
cell controller, 30  
cell map, 34  
coherency controller diagram, 35  
DIMM architecture and description, 33  
DIMM mixing rules, 33, 48  
DRAM erasure, 35  
ejectors, 110  
returning equipment, 71  
shipping containers, 61  
dimensions and weights, 49  
DIMM  
mixing rules, 33, 48  
discharge  
electrostatic, 59  
197  
 
door installation
    back, 79
    front, 79
DP rated power cables, 20
dual in-line memory module, (see DIMM)
dual-die processors, 19

E
e-Link cable description, 42
ejectors
    cell board, 110
electrical specifications, 50
electrostatic discharge, 59
EMI panel
    installing, 111
    removing, 89
environmental requirements, 54
equipment
    returning, 71
equipment footprint templates, 185
external e-Link cable description, 42

F
facility guidelines
    computer room layout, 185
    equipment footprint templates, 185
FEPS, 19
    description, 20
firmware
    ACPI interface, 44, 46
    EFI interface, 44, 46
    event IDs, 44, 46
    PAL interface, 44, 46
    POSSE shell, 44, 46
    SAL interface, 44, 46
front end power supply, (see FEPS)
front panel display, 179

G
gateway address, 100
global shared memory errors, 48
Gold Book, 112

H
halfdome utility connector board, (see HUCB)
hardware correctable errors, 48
HKP LED, 179
hot-swap oscillator, (see HSO)
housekeeping power
    front panel display, 96
    HKP LED, 96
    turning on, 96, 178
housekeeping power LED, 96, 179
HSO
    detailed description, 28
    LED status indications, 28
    location on backplane, 29
    part of clock subsystem, 27
HUCB
    DIP switch purpose, 18, 24
    general description, 25
    shown outside system, 25
humidity specifications, 54

I
I/O subsystem
    detailed description, 37
    enhanced rope definition, 37
    fat rope definition, 37
    illustrated I/O backplane slot mapping, 39
    PCI-X backplane functionality, 38
    SBA chip operation, 38, 39
inspecting
    cables, 112
    circuit boards, 112
installation
    EMI panel, 111
    PDCA, 86
    tools required for, 63
    visual inspection, 110
intake air filter, 18
interference
    communications, 59
inventory check, 60
IP address
    default values, 99
    LAN configuration screen, 100
    setting private and customer LAN, 99

J
JET
    invoking the software, 108
    power cycling after usage, 108
    purpose for invoking, 108
JTAG utility for scan test
    JUST, 108
JUST
    JTAG utility for scan test, 108

K
kick plates
    attaching to cabinet, 109
    shown on cabinet, 109

L
LAN
    port 0, 100
    port 1, 100
    status, 100
LED
    AC0 Present, 98, 180
    AC1 Present, 98, 180
    Attention, 179
    HKP (housekeeping), 96, 179
    Present, 96, 179
leveling feet
    attaching, 79
local power monitors, (see LPM)
LPM, 19

M
m-Link cable description, 42
MAC address, 100
management processor, (see MP)
moving the system, 72
MP
    detailed description, 24
    displaying the customer LAN parameters, 100
    exiting the main menu, 101
    general description, 22
    invoking a partition console, 104
    invoking the virtual front panel, 103
    physical connection to the customer LAN, 98
    returning to the main menu, 101
    setting the customer LAN parameters, 100
    shown in system, 24
    viewing the virtual front panel screen, 103

N
noise emission specifications, 56
nPartition access errors, 48

P
packing carton contents, 60
PDCA, 19
    4-wire voltage verification, 84
    5-wire voltage verification, 84
    ac breaker power on sequence, 179
    ac breakers, 96
    installation, 86
    redundancy provision, 20
    unpacking, 71
    wiring configurations, 71, 86
plenum rated data cables, 20
PM
    UGUY location, 22
PM3
    BPS sensor monitor, 21
    functions performed, 23
    inlet air sensor monitor, 21
post installation check, 112
power
    DP rated power cables, 20
    housekeeping, 96
    turning on housekeeping, 178
power dissipation, 54–55
power distribution component assembly, (see PDCA)
power monitor 3, (see PM3)
power on
    sequence, 21
power options
    option 6, 51–52
    option 7, 51, 52
power requirements
    I/O expansion cabinet, 53
    system, 52
power supply mounting screws, 66
power up
    48 V dc enablement, 21
    5.3 V dc enablement, 21
    power on sequence for cabinets, 104
processors
    dual-die, 19
    Itanium™, 17
    mixing rules, 48
    PA-RISC, 17

R
ramp extensions, 65
RCS
    detailed description, 28
    location on backplane, 29
    part of clock subsystem, 27
redundant clock source module, (see RCS)
repackaging checklist, 71
returning equipment, 71
routing I/O cables, 91

S
SBCH
    board revision information, 23
    part of MP, 24
    POST start, 21
    shown in system, 24
    USB hub provisioning, 22
server errors
    global shared memory, 48
    hardware correctable, 48
    nPartition access, 48
shipping dimensions and weights, 50
signoff, customer, 112
single board computer hub, (see SBCH)
site of installation, 72
site preparation verification, 60
skins, attaching, 75
space requirements
    computer room layout, 185
    equipment footprint templates, 185
subnet mask, 100
Superdome system
    air flow, 56
    computer room layout, 185
    history, 17
    I/O expansion cabinet, 18
    left cabinet, 18
    right cabinet, 18
    three cabinet configurations, 18
Support Management Station
    private LAN IP address, 99
    private LAN port designations, 100
switch fabric
    general description, 27
system backplane
    48 V supply pinouts, 29
    description of power supply modules, 30
    functionality provided, 26
    general description, 26
    housekeeping supply pinouts, 29
    I2C bus distribution, 27
    I2C device addresses, 27
    location of power supply modules, 30
    monitor and control functions, 27
    power distribution, 29
    sustained total bandwidth, 26
system clocks
    differences from sx1000 system, 24
system specifications, 49

T
temperature range
    IOX cabinet, 22
    Normal, 22
    OverTempHigh, 22
    OverTempLow, 22
    OverTempMid, 22
temperature specifications, 54
thermal report
    full configuration, 57
    minimum configuration, 57
    typical configuration, 57
tilt indicator
    description, 61
    shown in diagram, 62

U
UGUY
    comprised of, 23
    general description, 23
    part of the power subsystem, 19
    shown outside system, 23
universal glob of utilities, (see UGUY)
unpacking
    blower housings, 72
    blowers, 72
    pallet ramps, 65
    PDCA, 71
    system cabinet, 63

V
VxWorks load, 21

W
wrist strap usage, 59

X
XBC
    connected to cell boards, 18
    detailed description, 26
