Tuesday, August 10, 2010

The HTC Evo and a VT125

This follows on from my previous blog post about booting ancient OSes on emulators for ancient computers on your Android phone.

Since I still have an extensive collection of vintage DEC hardware, I decided to extend what I had been working on by connecting some vintage hardware up to the emulated system.

The first thing I had at hand was a DEC VT125 terminal - a close relative of the VT100, with some added graphics support. A quick power-up verified that it worked. Now, to get it to talk to my phone's emulated VAX.

Now, I didn't have any good way of connecting a serial port directly to my phone, but SIMH does support a telnet connection to the emulator's console, set up with a command like this (2301 is the TCP port to accept connections on):

sim> set console telnet=2301

Now, to create a telnet session from my VT125, I grabbed a Xyplex MAXserver 1640 from my basement. These are similar to a vintage DECserver, but support telnet (and many other things) in addition to the usual DEC-specific LAT protocol. Basically, I can hook a terminal up to this, and use it to telnet to a host. It also works the other way around: I can hook a system's serial port (such as a console port) up to it, telnet to the MAXserver, and reach the serial console on the physical machine. This latter setup is something we do with systems at work, and is very commonly done, as opposed to connecting serial terminals up to network-attached hosts, like I am trying to do here.

For the purposes of this post, I won't go into detail on how to set up a MAXserver, but I basically placed a boot image on a tftp server, told the MAXserver to boot from that image, gave it an IP address, and told it to reset itself to default settings.

To connect the MAXserver to the VT125, I used an RJ45 serial cable (a "rollover" cable) and a DB25-to-RJ45 adapter, which has the same pinout as a Cisco RJ45-to-DB25 DTE serial cable. Ethernet then connects the MAXserver to my home network.


Next, I set my phone to connect over WiFi to my home network, and noted its IP address. I then turned on my VT125, booted the MAXserver, and started up the emulated VAX on my phone. From there, I told the MAXserver to connect to my emulated VAX console:

Xyplex> connect 172.27.3.150:2301

Now, just boot the emulated VAX, and enjoy!


I have more pictures up on my flickr account.

Saturday, August 7, 2010

Compiling SIMH emulators for Android


In order to make this process easier on my phone, I used a rooted firmware. It will take some more effort to get this to work as a packaged application.
Setting up the development environment
To get started, you'll need a working native C compiler for Android. After a lot of trial and error, I discovered what I had to do. I used an amd64 build of Debian GNU/Linux; Ubuntu works the same way. Using a Linux host to compile the code, while not essential, will make your life a lot easier. Follow the official instructions to download the Android source using git, and then build it.
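For reference, the fetch-and-build steps look roughly like this. This is only a sketch: the checkout directory is my choice, and the manifest URL and branch name are assumptions - check the official instructions for the current values before running anything.

```shell
# Hedged sketch of fetching and building the Android sources on a Linux host.
# The manifest URL and the android-2.1_r1 branch are assumptions; consult the
# official build instructions for the right values for your release.
mkdir ~/android && cd ~/android
repo init -u https://android.googlesource.com/platform/manifest -b android-2.1_r1
repo sync      # downloads the full source tree; this takes a long while
make -j4       # builds the platform, including the host tools and emulator
```

Once that finishes, the emulator and other host tools end up under out/host/linux-x86/bin.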

Along with doing that, I tested this on the Android emulator, which is part of what you just downloaded and built, under out/host/linux-x86/bin. Put that directory in your path, run the "android" command, create a virtual device, and boot it. If you built a virtual device named "Android21", for example, you'll want to run "emulator -avd Android21 -shell" so that you can get a shell on the virtual device. To copy files over to the image, the easiest method that I've found is to shut down the emulator and mount the virtual sdcard image:

$ sudo mount -o loop ~/.android/avd/Android21.avd/sdcard.img /mnt
$ sudo cp whatever /mnt
$ sudo umount /mnt

In addition, you will want the "agcc" script to make it easier to compile things. Download it from here. I modified my copy to use gcc-4.4.0 instead of gcc-4.2.1: change all references to "4.2.1" in the agcc script to "4.4.0", and add the location of arm-eabi-gcc to your path. arm-eabi-gcc can be found under prebuilt/linux-x86/toolchain/arm-eabi-4.4.0/bin in the tree where you built the Android sources.
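Those two changes can be scripted. A sketch (the ~/android path is an assumption for wherever your source tree lives, and the one-line agcc.demo file is just a stand-in so the effect of the sed is visible - run the same sed against your real agcc copy):

```shell
# Put the prebuilt ARM cross-compiler on the path (assumes the Android
# tree was built in ~/android; adjust to your actual location).
export PATH="$HOME/android/prebuilt/linux-x86/toolchain/arm-eabi-4.4.0/bin:$PATH"

# Swap every "4.2.1" in agcc for "4.4.0". Demonstrated on a stand-in file;
# point the sed at your real agcc script instead.
printf 'toolchain version 4.2.1\n' > agcc.demo
sed -i 's/4\.2\.1/4.4.0/g' agcc.demo
cat agcc.demo
```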

Fixing the Android SDK and SIMH makefile

The current version of the Android libc (bionic) headers has a problem compiling when bionic/libc/kernel/arch-arm/asm/byteorder.h is included. In order to make the file compile, comment out these lines (lines 22-27 in my copy):

/*#ifndef __thumb__
* if (!__builtin_constant_p(x)) {
*
* asm ("eor\t%0, %1, %1, ror #16" : "=r" (t) : "r" (x));
* } else
*#endif */

Once you do that, you will need to download the SIMH sources. Unpack them, and modify the makefile in the top directory to make the following changes:

On line 12, remove -lrt from OS_CCDEFS, so that it reads:
OS_CCDEFS = -lm -D_GNU_SOURCE

Change all references of "gcc" to "agcc".
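If you'd rather script those makefile edits, something like this works. It's shown against a two-line stand-in for the real makefile (the exact lines in your copy may differ); run the same sed commands from the top of the unpacked SIMH tree.

```shell
# Stand-in excerpt for the SIMH makefile: the OS_CCDEFS line plus a gcc
# reference. The real file is larger; these two lines just show the effect.
printf 'OS_CCDEFS = -lm -lrt -D_GNU_SOURCE\nCC = gcc -std=c99\n' > makefile.demo

sed -i 's/ -lrt//' makefile.demo         # drop -lrt (bionic has no separate librt)
sed -i 's/\bgcc\b/agcc/g' makefile.demo  # compile with the agcc wrapper instead
cat makefile.demo
```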

Only some of the simulators actually compile, and I haven't tried to build network support for any of them. In order to get network support, you would need to compile libpcap as well. For now, that is left as an exercise for the reader. :)

After those changes, I just did make vax vax780 to build the MicroVAX 3900 and VAX-11/780 simulators, which can then be copied over to your phone or emulator. You will probably get a bunch of warnings about variable-size enums versus 32-bit enums; they seem to be harmless, and can safely be ignored.

If you're too lazy to compile it

If you don't feel like spending the time to set up a development environment, and compile SIMH by yourself, I have a pre-compiled version for Android available. I used SIMH v3.8-1, which is the newest release as of this posting. You can get a pre-compiled copy of the Android emulator to test on from the pre-compiled Android SDK. After you download and unpack that, you will need to put the tools directory inside the sdk into your path.

Preparing your phone and copying things over

You will need a rooted phone to make this work easily. Rooting your phone is left as an exercise for the reader. Once you have root permissions, you will need to enable USB debugging on your phone. On my Evo, it's under Settings -> Applications -> Development -> USB debugging.

You will need an application that will work as a terminal emulator on your phone. You can use "adb shell" from the Android SDK, which will give you a shell from your phone on your computer. To run completely hosted from the phone, use something like ConnectBot.

From this point you can copy the files necessary for the emulator to your phone. I copied all the necessary files to my phone's SD card:

$ adb shell mkdir /sdcard/simh
$ adb push BIN/vax /sdcard/simh
$ adb push VAX/ka655x.bin /sdcard/simh
...

Now, use adb shell to do a few things on the phone itself. Pay attention to what your phone is saying, as the first time you try to "su" on your phone, it may pop up a dialog asking if this is ok. Whether or not you see that will depend on exactly how you rooted your phone. If "adb shell" gives you a "#" prompt straight away, you don't need to use su, as you're already root.

$ adb shell
$ su -
# mkdir /data/simh
# cat /sdcard/simh/vax > /data/simh/vax
# chmod 755 /data/simh/vax

It is necessary to put any executables on the internal storage (e.g. in /data/simh, like I did), because you cannot directly execute binaries from the sdcard, at least in Android 2.1.

Running a SIMH emulator

Once you've done that, as root on the phone, do something like:

# cd /sdcard/simh
# /data/simh/vax

And then just use SIMH as you normally would.
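To save some typing in an on-phone terminal, the usual SIMH startup commands can go in a configuration file passed on the command line. A hypothetical vax.ini (the filename and the 43bsd.dsk disk image are placeholders for whatever you've actually copied over):

```
; vax.ini - example SIMH startup script (filename and disk image are placeholders)
set cpu 64m              ; memory size for the emulated MicroVAX
set rq0 ra92             ; configure the first MSCP disk as an RA92
attach rq0 43bsd.dsk     ; attach your disk image
boot cpu
```

Then start it with "/data/simh/vax vax.ini" from the directory holding your images.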

ConnectBot allows for multiple simultaneous sessions, so you could do "set console telnet=2300" inside the emulator, and then open another telnet session in ConnectBot to 127.0.0.1:2300 to connect to a separate console. ConnectBot presents itself as a "screen" terminal, which seems to do an adequate VT100 emulation for most things. The most I've tested it so far is running /usr/games/worms from 4.3BSD, after "setenv TERM vt100". If you have your device on a WiFi network, you could even use another machine to telnet into SIMH and act as the console, or as another emulated serial terminal on the system.

That's it! If you don't know what to do from here, take a look around the SIMH site: you can run things like ancient versions of UNIX, OpenVMS through HP's hobbyist program, or NetBSD, or play with the other emulators and other OSes. Where you go from here is up to you.

EDIT Aug 8, 2010:

I forgot to add the pictures that I have taken of this. Check them out on flickr. I've also added a picture to the top of this post.

Friday, August 6, 2010

Back again

After a bit of a hiatus from posts, I'm back again.

I've finished a big hurdle in the start of downsizing and creating focus in my computer and interesting technology collection, and after finally buying a house, I have completely finished moving out of a ~2500 sq ft warehouse that I was using to store my collection. At its peak, it was stacked high with equipment, and paths to navigate through the space were sometimes non-existent.

I have since resolved to limit the size of my collection, and thus the number of projects that I want to do. As a direct result of my massive downsizing effort (with critical help from friends for some larger items and a push to finish up towards the end of July - but otherwise mostly done by myself from January through July of 2010), I now have time to work on projects instead of spending all of my time moving things back and forth.

I replaced my old iPhone 1.0 with an HTC Evo, running Android, back in June when they came out, and have been exceedingly happy with it. As a direct result of how easy it is to hack and develop on, I have started some projects involving running computer simulators on it. Currently, I have the set of VAX emulators (MicroVAX 3900 and VAX-11/780) from Bob Supnik's SIMH emulator collection running both 4.3BSD and OpenVMS, with a minimal amount of work.

I have posted pictures of 4.3BSD running both on the Android emulator, and my phone on my Flickr account.

My next post will describe what I had to do to get SIMH to compile and run on my phone. After playing with that, my next target is the Hercules IBM mainframe emulator.

Saturday, February 28, 2009

New blog

I've created a new blog to keep track of the progress we're making at work on our new Coates cluster that we're putting in this spring. I know I haven't posted here in a while, but I expect to spend more time posting to that one (and I want to keep the content separate from my personal blog anyways).

Thursday, June 26, 2008

A supercomputer reborn




After my last entry, I spent a bit of time poking at Linux kernel versions, and found what I had created the last time I tried to do a Debian install on one of the SP high nodes: a custom-compiled version of the 2.6.8 kernel, with the Debian Sarge installer thrown in as an initrd. To my amazement, it actually booted, and I had a system running a somewhat useful version of Linux again, instead of AIX.

Now, my next goal is to get this system to run the latest kernel release, and up-to-date software. I managed to locate my set of 32GB of ram for the system (arranged as 128 256MB modules!), plug that in, and end up with a system that has more memory than disk space, and uses more memory when it boots (for things like page tables) than there is space on the disk. Right now, I seem to be having trouble booting any kernels past 2.6.8 on the system, ending up with the kernel hitting bad page faults while it tries to set up the MPIC (the interrupt controller on PowerPC systems).

After I get both of those in place, I am hoping to take a spare fiber channel card and the SCSI/FC target driver in the current Linux kernel, and turn the system into a 32GB or so solid-state disk. I'm thinking that this would be a good concept to test, as it's way cheaper than getting a modern solid-state disk from Texas Memory Systems. In fact, you can pick up a new commodity x86 system with 128 or 256GB of ram and a fiber channel card or two for significantly less than a commercial solid-state disk solution.

Sunday, June 22, 2008

The death of a supercomputer



In February of 2005, Purdue was given the hardware that used to be Blue Horizon, the San Diego Supercomputer Center's old IBM RS/6000 SP supercomputer, which had been purchased through funding from the NSF. When it was new in 2000, it placed 8th on the list of the top 500 supercomputers in the world. By the time Purdue acquired it, the system was well off the bottom of the chart. The system was a set of 144 8-processor 375MHz POWER3 "SP high node" (9076-N81) systems with 4GB of ram each.

The people in charge of my department, the Rosen Center for Advanced Computing, decided that the price of the system (free, plus shipping) was good enough to send two of our hardware guys out to condense the system down, maxing out the systems: going from two 4-processor modules to four modules (16 processors), and from 4GB to 16GB of memory per system. At the time, we had the free power and floorspace, so it seemed like it could be a reasonable idea, and the systems were still computationally useful for a year or so after we got them. We condensed the system down from nearly 40 racks of machines to just 10, each one somewhere around twice as fast as my dual-G5 at a Linux kernel compile (one of my standard metrics for testing speed; I did this under a Debian Linux install on both systems).

Unfortunately, the amount of time and effort necessary to set up an IBM SP and AIX as a useful compute resource is non-trivial. Adding to the problem, we lost two of our senior systems administrators in the Summer of 2005, one of whom was our AIX guru and had set up our existing IBM SP systems. I had played around a bit with our test SP cluster, including reinstalling it, and discovered how much effort is required to make a useful system. Just the software necessary to do a base OS install on an SP has an install manual that's over an inch thick.

So, by Summer of 2006, our management finally decided that we should get rid of the system, which meant that some of it would be coming home with me. I purchased the system from Purdue's surplus store for somewhere around $500.

The first rack of nodes (there are four nodes per rack) went onto ebay, and sold to a researcher in China for about enough money to pay for my endeavor. I shipped one more rack to a computer collector in New Jersey, and on a Saturday evening after the annual Vintage Computer Festival/Midwest show that I ran, one more rack of systems was loaded into the back of a Toyota minivan and headed up, alongside a lawnmower, to Ontario, Canada. Later, a second rack would go to Canada, a few would get scrapped for parts, some nodes were stripped of ram to sell to a reseller, another rack went to a company in Minnesota, and the rest sat around in my warehouse until they got scrapped or used for parts. All that remain are two of the original nodes, and a few boxes of boards, heat sinks, memory and other parts that need to go to a scrapper, so that they can be recycled.

I'm still keeping the two nodes, and 8 or so SP thin nodes (9076-270), partly because the POWER architecture is neat, and partly because a machine with 16GB of ram in it is still kinda pricey. Plus, in a bit more than a month, we'll be retiring our remaining SP system, which has memory modules that can push my two nodes to 64GB each. Sure, you can buy faster machines with the same amount of memory, which are smaller and use less power, from people like Sun, but I can also pay for a lot of power with the amount of money that one new machine would cost.

I've actually managed to get Debian Linux running on them; they don't work with many kernel versions, but a 2.6.8 kernel that came with a past Debian installer seemed to work OK with them. I'm trying to revive the nodes I have, but I don't seem to be able to acquire the bootable installer anymore from the usual Debian places, and I'm not sure if I have it archived off somewhere accessible. Still, I should be able to take an installed copy running on a different machine, clone it, rebuild and install the correct kernel version, and boot that on the system. I just haven't had enough time or round tuits to do that yet.

So it seems, even in the death of a supercomputer, the machine still lives on, dissected, and disseminated to other countries, providing what help it can to further science, maybe just become an interesting conversation piece, or even become recycled into parts for tomorrow's supercomputing hardware.

Friday, June 20, 2008

"My" SiCortex SC5832


Well, I actually just run it; my employer, Purdue, actually owns it for now. But in 5 years or so, maybe I'll get a chance to buy it for my own enjoyment, just like other things I've managed to collect.

It is a fun little MIPS processor cluster in a box. The one we have is populated with 540 nodes, each with six 64-bit MIPS cores and 8GB of ram. The cores are kinda poky (500MHz), but each node CPU chip (with 6 cores, a PCI-E controller, two DDR2 memory controllers, and the high-speed interconnect fabric switch) draws a whole 600mW. The system draws less than 11kW as we have it configured now, and looks quite pretty. That's approximately the same amount of power that a single rack of 24 of our new "steele" cluster nodes uses (Dell 1950s with dual quad-core Xeon E5410 CPUs and 16 or 32GB of ram each).

We got the system as part of a "green" computing initiative that we've been working on, as our new President seems quite interested in doing green things. It is also one of the few things we have for HPC that's not a commodity x86 cluster. We got this just after replacing three of our old compute clusters (dual-processor, single-core things) with a new dual-socket quad-core cluster, also in the name of green computing. Hopefully this will stave off our need for a new datacenter for at least another year or two.

There is some hope that the system will help revive an old project, started by the late David Moffett, called the "High Performance Classroom": allowing students to learn about parallel computing through the use of dedicated compute time on the system. The idea was suggested by Matt Reilly of SiCortex when he came to visit Purdue and talk about the system we have. This is one of the few machines my work owns that wasn't purchased with funds from users or specific grants, and for which we actually have some leeway in specifying how the system gets used.

Our other discretionary resource is a cluster of 3-year-old desktop computers that gets updated every year; old machines from student computer labs get rotated out of use after 3 years, and into a cluster that's about 500 machines in size. It's the only dedicated resource we currently have running that anyone on campus can get an account on to run their own jobs. Some of these machines are pictured in the background of the picture of the SiCortex above.

In any case, hopefully our higher-up management decides that the SiCortex is a good enough machine to keep around (we got in on a sort-of rent-to-own plan), and use it to help further one of the fundamental goals of Purdue - not the Research that keeps bringing in money, fancy awards, and prizes, but Teaching Students, which sometimes seems like it gets forgotten behind the gold-plated awards from the NSF and other organizations for research.