Wednesday, April 13, 2011

IBM Mainframes at home




Introduction

I recently seized the opportunity to acquire a good-condition IBM zSeries 890 computer, which I knew had been well taken care of (including a careful de-installation).  This model was sold from April of 2004 until June of 2008.  It is a newer member of the family of computers that IBM launched with the System/360 back in 1964.  The instruction set and system features have seen changes and additions over that time, but most user programs written for the S/360 Model 30 in 1964 still run without any changes on the systems being sold today, such as the IBM zEnterprise 196 (though the changes have required updates to system-level software, such as operating systems).

Major landmarks along the way: the introduction of the S/360 in 1964 (with a 24-bit address space), virtual memory becoming standard on the S/370 in 1972, multiple-CPU/SMP systems in 1973, a 31-bit address space with S/370-XA in 1981, LPARs (logical partitions) in 1988, S/390 with ESCON (10 and later 17 MByte/s fiber-based peripheral attachment), CMOS-based CPUs with the S/390 G3 in the 1990s, and the 64-bit architecture in 2000 with the IBM z900.  More information about all of these is available via Google.

System Details

The system is in a wide, heavy rack, about 30" wide and 76" tall, and about 1500 lbs according to IBM.  This is a typical weight and size for an IBM rack of equipment (similar to older IBM S/390 and RS/6000 SP systems).  Inside the rack, starting at the top, are the rack-level power supply, the processor and memory cabinet (CEC), and, at the bottom, the I/O card enclosure.  On a set of fold-out arms on the front side of the machine are the Service Element (SE) laptops, which act as a sort of service processor: they handle loading firmware into the system, turning power on and off, configuring the hypervisor for LPARs, accessing the system consoles, and other low-level system tasks.  They also manage features such as Capacity Upgrade on Demand (CUoD) and Capacity Backup, limit the hardware to what you have actually paid for, and are serial-number locked to the particular machine they're connected to.

My machine is a 2086-A04 (the machine type common to all z890 systems), configured as a 2086-140, with two dual-port fiber Gigabit Ethernet adapters, two dual-port 2Gb Fibre Channel/FICON adapters, and two 15-port ESCON adapters.  In addition, the SE laptops connect to the outside world using a pair of 10/100 PCMCIA Ethernet adapters.  Earlier systems used Token Ring instead... I'm happy to not have to deal with Token Ring for this anymore.

Central Electronics Complex - CPU and Memory

The -140 configuration means that there is 1 CP (Central Processor), which runs at speed level 4 (on a scale of 1 to 7) and can run any z890-compatible OS.  It also has one "IFL" (Integrated Facility for Linux): a processor core that runs at full speed (level 7), is microcode-locked to running only Linux, and doesn't count towards the speed rating of the machine.

IBM mainframes are given a speed rating, originally in MIPS and now in MSU, which is used to price the software that runs on them.  Mine is rated at 110 MIPS and 17 MSU.  By comparison, a full-speed CPU, which is what my IFL is, is rated at 366 MIPS and 56 MSU.  The z890 came in 28 different speed increments (1-4 CPUs x 1-7 speed levels), from 26 MIPS / 4 MSU to 1365 MIPS / 208 MSU, so you could closely match the capacity you actually required, which helps keep software costs down.

Processor book, with memory removed

The z890 is actually nearly the same as a z990 system, except that it has only one processor book (out of the 4 that fit into a one-rack z990) and only scales to 4 user-accessible cores (there is one additional core, called a "System Assist Processor" or SAP, which is dedicated to system I/O handling).  The full z990, by comparison, can scale up to 56 total processor cores.

System memory module - 8GB

The system has 8GB of RAM on a custom module that plugs into the processor "book".  Memory is available in 8GB increments up to 32GB.  Since there is only one memory slot, the 24GB option is actually implemented as a 32GB module with 8GB "turned off" by IBM.  As you will learn after working with these systems for a bit, IBM has an annoying habit of charging you to turn on parts of the system which you already own, but haven't paid to unlock yet.

STI cables and connectors

Each processor book (there is only one in the z890) has 8 "STI" (Self-Timed Interface) links, which connect the CPU and memory to the I/O adapters.  In the z890, these links run at 2GBps (16Gbps).  Seven of them are usable for I/O cards in a fully configured system, and the remaining one (or more) can be used to network (couple) systems together for redundancy.  With the appropriate hardware, you can run these coupling links (though not at the full 2GBps) over distances of up to 100km, creating what IBM calls a "geographically-diverse" system, which can serve as a disaster-recovery link.

Peripherals

Two things that I appreciate about this system compared to past systems are the Ethernet on its SE (which also lets me boot the system from an FTP server on its network), and SCSI-over-Fibre-Channel support, which lets me use standard FC RAID arrays as storage for a Linux system.
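
To give a flavour of what that looks like from the Linux side, here is a rough sketch of attaching one LUN through the zfcp driver; the device number, WWPN and LUN below are made up, and the exact sysfs steps vary a bit between kernel versions:

# chccwdev -e 0.0.1800                    # bring the FCP adapter online (chccwdev is from s390-tools)
# echo 0x5005076300c213e9 > /sys/bus/ccw/drivers/zfcp/0.0.1800/port_add
# echo 0x4010400000000000 > /sys/bus/ccw/drivers/zfcp/0.0.1800/0x5005076300c213e9/unit_add
# lsscsi                                  # the array's LUN then shows up as an ordinary SCSI disk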

IBM 3174 Terminal Controllers and 3483 Terminal

In addition, the FICON adapters can act as a system channel (like ESCON or Bus & Tag) to appropriate peripherals, at speeds of up to about 200MBps.  The system also has ESCON, a more traditional connection, which allows connections to peripherals at up to 17MBps.  Parallel Channel (aka S/370 Channel, or Bus & Tag) peripherals can be hooked up using a protocol converter such as the IBM 9034.  This will let me connect my 3480 cartridge-style channel-attached tape drives, 3174-21L 3270-style terminal controllers, and eventually 3420-8 vacuum-column 9-track tape drives, to the system.

IBM 9034 ESCON to Bus & Tag Converter

To Be Continued...

Future posts will document installation and setup of Debian GNU/Linux, and peripherals.

Saturday, October 9, 2010

IBM p550 at home with Debian and Infiniband


It's time for yet another blog post on getting once-expensive machines running at home.

Introduction

I'm working with an IBM pSeries machine called a p550 (specifically, a model 9113-550 in IBM speak). It was built in 2004, had a list price of some tens of kilo-bucks new, has four 1.5GHz POWER5 cores and 8GB of RAM, and runs AIX up through the most recent releases of AIX 7.

It has a built-in hypervisor and what IBM calls "LPAR" support, a mode of virtualization that gives you "Logical PARtitions" of the memory and CPUs in the machine, with a granularity of 1/10th of a CPU. LPAR support requires a desktop machine that IBM calls an HMC, or "Hardware Management Console", which breaks out all of the logical consoles on the machine and lets you configure resource allocation and things like virtual Ethernet switches and virtual SCSI adapters. In addition, a piece of software for the machine called VIOS, or "Virtual I/O Server", is required in LPAR mode if you want to share hardware adapters (e.g., Ethernet, SCSI, or Fibre Channel adapters) between OSes. Since I have neither of those, I am just running the machine in "bare metal" mode, with only one OS instance.

For I/O, the system has a built-in SCSI RAID controller, Gigabit Ethernet, a service processor that controls functions like power on the machine, 5 internal hot-plug PCI-X slots, and an external link that allows additional I/O drawers with disks and PCI-X cards to be added. I have installed a Mellanox InfiniHost Infiniband card to hook up to my Infiniband fabric.

Making the Service Processor work for you

In addition to the serial console port, the system has a pair of Ethernet ports, which are designed to connect to a system HMC, but which also allow HTTPS-based access to the service processor menus. By default, it will try to get an address via DHCP, or you can configure one through the serial port. The service processor requires you to log in to do anything; I believe the default username/password combination is admin/admin. That's what we had it set to on the machines at work.

To set the IP address, you need to navigate through the menus:
5. Network Services
1. Network Configuration
1. Configure interface Eth0
Then, choose either static or dynamic, and enter information as needed.

In order to get this to work for me, I had to use Firefox and enable an SSL option, because while the interface uses HTTPS, it uses a somewhat insecure method of doing SSL that is disabled by default. To enable this, put "about:config" in the address bar, and change the option "security.ssl3.rsa_null_md5" to "true". Once you do that, you can get to the web version of the service processor menus (ASMI in IBM-speak) at https://1.2.3.4 (replacing 1.2.3.4 with the IP address you set above).

One additional thing you will probably want to set up is "Serial Port Snoop", under System Service Aids -> Serial Port Snoop. Setting a "Snoop String" will allow you to enter a string through the serial console to force-reboot the machine if it locks up, or if you do something wrong while booting and the console isn't pointed at the right place.

Installing Debian

I net-booted the installer. To do this, set up the host in dhcpd.conf with an entry like this:

host p550 {
    hardware ethernet 00:02:55:df:d5:dd;
    fixed-address p550.blah;
    next-server storage.blah;
    filename "/tftpboot/debian-squeeze-ppc64-vmlinuz-chrp.initrd";
}
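
Once the entry is in place, restart the DHCP server and make sure the image actually exists under the TFTP root. A quick sanity check, assuming the Squeeze-era isc-dhcp-server init script name (older releases call it dhcp3-server):

# /etc/init.d/isc-dhcp-server restart
# ls -l /tftpboot/debian-squeeze-ppc64-vmlinuz-chrp.initrd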

Boot the machine into OpenFirmware (hit "8" at the firmware "IBM IBM IBM IBM ..." screen), and net-boot from there:

0> boot net console=hvsi0

If you don't boot with the right args from OpenFirmware, you won't get a working console when you boot into the installer. That's where the "serial port snoop" option from the service processor comes in handy.

Once you get to the end of the installer, you will need to do some magic to get the bootloader (yaboot) installed. Hopefully, the Debian people will get some of this sorted out before the release of Squeeze. Tell the installer that you want a shell, then do this:

# mount --bind /dev /target/dev    # make the host's device nodes visible in the target
# chroot /target
# mount -t proc proc /proc         # yaboot's tools need /proc mounted inside the chroot
# yabootconfig                     # generate /etc/yaboot.conf
# ybin                             # install yaboot into the boot partition

Upgrading firmware

Debian doesn't include the update_flash binary in its powerpc-utils package, so download the latest upstream binary release in RPM format.

Convert that RPM to a .deb with alien (apt-get install alien if you don't have it):

# alien powerpc-utils-1.2.3-0.ppc.rpm

then

# apt-get remove powerpc-ibm-utils powerpc-utils
# dpkg -i powerpc-utils_1.2.3-1_powerpc.deb

Now you can download a new flash image from IBM. Once you have it, use alien to convert and unpack the RPM, and run "update_flash ./tmp/fwupdate/01SF240_403_382", where 01SF240_403_382 is the flash image name from the RPM you downloaded. When you reboot, Linux will write the new image to the system flash just before the restart.
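
As a concrete sketch, the whole sequence looks roughly like this (the RPM and generated .deb filenames below are illustrative and will vary with whatever IBM is currently shipping):

# alien 01SF240_403_382.rpm          # convert the firmware RPM to a .deb
# dpkg -x ./01sf240*.deb .           # unpack it; the image lands under ./tmp/fwupdate/
# update_flash ./tmp/fwupdate/01SF240_403_382
# shutdown -r now                    # the new firmware is written as the system goes down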

Infiniband and beyond

I initially had some problems getting Infiniband set up and going. I'm using a Topspin SDR Infiniband adapter, which is basically a stock Mellanox InfiniHost. It seems that the hypervisor on the machine wasn't allocating all of the resources that the card was asking for.

After some discussion on the linuxppc-dev mailing list, it was pointed out that there are certain slots in the machine which the system calls "super slots", and to which the firmware is willing to allocate more resources than a typical PCI-X card requests. This Redbook (PDF) on IBM's Redbooks site details Infiniband usage on pSeries systems; Section 3.4.3 indicates which slots an Infiniband adapter may be installed into on particular machines. On a p550, these are slots C2 and C5. I had plugged my IB adapter into slot C1, which is why I was having problems.
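
If you'd rather check slot placement from software than by crawling around the back of the rack, the upstream powerpc-utils package includes an lsslot tool that lists PCI slots by location code (C1 through C5 on this box); for example:

# lsslot -c pci          # list the PCI slots by location code and what's plugged into them
# lspci | grep -i infini # confirm the InfiniHost adapter is visible on the bus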

After moving it to the right slot, it was just a matter of getting the right drivers loaded in the host OS. To use IP over Infiniband, you'll want the ib_ipoib module. To use RDMA and the Verbs interface, you'll want the ib_umad and ib_uverbs modules loaded. At that point, it basically acts like any other Linux system with Infiniband, just with more I/O bandwidth than you can get out of a typical PCI-X based system.
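
Here's a minimal sketch of bringing the interface up by hand, assuming the drivers are built as modules and using a made-up 192.168.100.0/24 subnet for IPoIB (a subnet manager such as opensm needs to be running somewhere on the fabric):

# modprobe ib_ipoib                 # IP-over-Infiniband; creates the ib0 interface
# modprobe ib_umad                  # userspace MAD access, needed by diagnostic tools
# modprobe ib_uverbs                # userspace verbs access for RDMA applications
# ifconfig ib0 192.168.100.2 netmask 255.255.255.0 up
# ibstat                            # from infiniband-diags: check port state and link rate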

What next?

Setting up an HMC and playing around with virtualization on the machine sounds like it could be a good time.