This is the Age of the Train… again?

Inter-City 125 at York

Having been interested in the railways since the mid-1970s, it was with some interest that I saw an article on the BBC News website today about the replacement of the old “Inter-City 125”, or “High Speed Train”, trains with new Hitachi units.

Now, most of the article was fine, but there was one sentence, in the caption for a photo, which was downright wrong: “With its familiar sloping nose, the 125 symbolised a new era of clean lines and high technology on a network that had been underfunded and getting tatty for decades.” This is not how things were at the time, and here’s why:

The HST, or “Inter-City 125” as it was marketed, came into service in 1977 on the Great Western Mainline between Paddington and Penzance. This was less than ten years after the last of the modernisation-era diesels came into service, and little more than ten years after the West Coast Mainline had been completely upgraded and electrified. Indeed, no locomotive on the system was more than 17 years old, all freight rolling stock had been replaced, as had many of the coaches and all the local trains, except for those on the Southern Region’s third-rail electric system.

But there’s more to the mistake than even these facts, and to illustrate this we need to go back to the origins of the HST itself… It’s also a good story.

In the late 1960s, even after the upgrade of the West Coast Mainline to full electrification, train speeds, and hence journey times, hadn’t changed a great deal. Abroad, Japan had the bullet trains, and in France SNCF had been breaking records using their powerful electric locomotives (but only on specially built high-speed lines). The British Rail Board could see that something needed to be done.

British rail stock, even the fastest, was limited to 100mph running, even on the best tracks. This was due to a combination of factors: the network was a general-purpose system carrying both passenger and freight traffic, the track layout meandered according to the Victorian builders’ whims, and the wheel and suspension technology of the trains was at its limit. So it was decided to start a project for a train which could overcome all of these problems and run at 155mph. And so the Advanced Passenger Train (APT) project was born.

APT-E (Experimental)

BR decided that the best way of building a new, high technology train was from the ground up. They didn’t want any legacy rail rolling stock builders trying to refine the old designs, they wanted new thinking. So, instead they brought in people from the aero industry to look at the problem from a new perspective.

By 1972, after a great deal of research into wheel design, suspension systems, aerodynamics and coach tilting the first test bed train set arrived on the tracks, the APT-E (experimental). This was a combination of two power cars being powered by gas turbine engines which generated electricity to power the traction motors which turned the wheels plus a couple of “passenger” cars containing all the test equipment. Initially this ran only on the Old Dalby test track near Derby but later visited most of the rail network.

The APT-E was the first actively tilting train in the world. Previous attempts at tilting trains, in places such as Italy, used a pendulum system, but these didn’t work very well. (This is where the name “Pendolino” comes from, later used for the current tilting trains on the West Coast Mainline, but more of that later…) The APT-E used hydraulic actuators to rotate the coaches so that the centre of gravity moved inwards on bends, meaning that they could be taken faster without risk of the train toppling over. It also had the effect of minimising the sideways load on the passengers, but this was not its primary purpose.

However, even by the time the APT-E made its way onto the rails some within British Rail could see that the APT project was going to take a lot longer than first envisaged and the system needed a stop-gap. This wasn’t a universal view within the British Rail Board and it took a rogue element to start a “skunk works” project using a design team from the traditional rail rolling stock manufacturers. This was the “High Speed Train” (HST) project.

HST Prototype

Initially, the HST project was hidden. The group took a great deal of the fruits of the early research work from the APT project and adapted them to a more traditional railway design. The most important section of the APT design they used was the bogie/suspension/axle innovation. Previous designs had major problems with oscillating instability above 100mph, causing the flanges on the wheels to start bouncing off the rails as the train went along. Not only was this dangerous, it also caused massive rail and wheel wear, and the forces damaged the bogie frames. The APT researchers had managed to solve these problems. The other innovations carried over included the use of monocoque structural design and the early aerodynamic work.

By the time the British Rail Board cottoned on that they needed a stop-gap before the APT came into service there was already a prototype HST almost complete, which took to the rails in 1974/75.

The main difference you will notice between this prototype and the production units, other than the natty “inverse Inter-City” colour scheme, is the cab design. This Class 252 unit has lights directly under the window, and buffers. (Some of the later units had buffers retro-fitted in the 2000s for some reason.) Other than that and a lower power output, the prototype was almost identical to the first production units.

The go-ahead for production happened pretty quickly after that, and the first sets of power cars and Mark III coaches started replacing the expresses on the Great Western Mainline in 1977; these were the Class 253. Two years later the second production run, which had slightly uprated engines and a guard’s van added, started taking over on the East Coast Mainline; these were the Class 254. The rest is history.

APT Prototype at Carlisle

As for the APT, the project continued, with the first few prototype sets of power cars and coaches being rolled out in 1979. Unfortunately, due to political pressure, they were forced into passenger service almost immediately, before they’d been used for the purpose they were designed for: testing and refining the systems, especially the tilting mechanism, which had been redesigned from the ground up to be failsafe, unlike the APT-E’s. This, combined with a disastrous PR folly of a launch, involving plying journalists with as much booze as they could drink and pandering to a minor celebrity (Isla St Clair), made for a media storm and a political backlash the project never recovered from. (At the time the £1m “wasted” on the APT project was declared a scandal. The French would spend this much on 1km of TGV track in the late 1970s!) Soon after all the bugs in the APT trains and tilting systems had been sorted out in 1985, the trains were withdrawn from service and the patents sold to foreign companies, such as the one which went on to build the Pendolino trains for the West Coast Mainline.

 

2014 – The year of retro-computing.

For me, other than health issues which I won’t say any more about here, 2014 has been a year dominated by retro-computing sparked off by the call from the Museum of the History of Science in Oxford for exhibits for an exhibition they were setting up.

The call went out in January, a little before my previous post (this is called “irregular” for a reason), and I responded by offering the use of my archive of machines. In the end they took up the offer of a Sinclair QL and a BBC Micro.

Of course, this meant that I had to get the machines down from the loft to check them out, clean them and generally prepare them for the exhibition, which was going to start in May.

The machines themselves proved to be in fine fettle but the floppy disk drives needed a bit of coaxing. The worst problem was that the lubricant used in the 1980s on the drives had mostly dried out to a sticky goo, the opposite of a lubricant. This took a little while to sort out. Another issue was getting a way to display the computers’ output.

My Sinclair QL showing the appalling echoing and smearing on the TV

The LCD TV I use in my spare room as a monitor for my server does have an analogue TV input, along with HDMI, SCART, VGA etc., so it shouldn’t have been a major problem. However, it’s obvious that the company Dixons/Currys chose to build their Sandstrom TVs never actually tested the analogue circuitry very well. Displaying anything with a defined, high-contrast signal produces, at best, multiple echoes of the point/line across the screen or, at worst, a smeared bright line to the left of the pixels. Not exactly useful for anything other than a quick test.

This problem forced me to build new SCART video cables for the machines. A fiddly job at the best of times, not helped by the TV’s insistence on only displaying composite video through the SCART connector unless one pin is energised with 3 volts, with no way of overriding this in the menu system. Quite a pain when the computers to be connected don’t produce this voltage (and it’s not available from the TV’s SCART either). So a battery box had to be installed and velcroed on. What a pain.

Anyway, by the time Easter came I had everything ready, and delivery to the museum was planned for after I got back from Cornwall. Unfortunately my health intervened and I was stuck down South for rather longer than expected, so I missed the deadline for delivery. The museum managed to source a BBC Micro from someone else, along with my friend Janet’s Beeb, so they could set up without my kit initially.

Thankfully I got back to Oxford just before the exhibition was to begin, so the team from the museum rocked up in a van and collected my QL, screen and floppy and they were deployed just in time for the opening.

Unfortunately I was still too ill to go into town so a friend took a picture of the set-up for me. I was so glad everything had turned out OK.

The Sinclair QL and BBC Micro doing public duty in the "Geek is Good" exhibition. (Sorry for the camera shake.)


A few weeks later, after I’d become well enough, I travelled into town and took a look at the exhibition, “Geek is Good”, and realised that the materials in the hands-on display area were rather basic and used an arcane BASIC programming example which was really dull and, well, very 1970s.

This spurred me into action. Being too unwell to go into work did allow me time to use what energy I had to create a set of three programs for the BBC and the QL, exact equivalents in both BASIC dialects, and a crib sheet for program entry, explaining how to edit lines and simplify the typing using the “AUTO” command. Far better.

All this action at the museum also inspired me to “play” with the other machines in my collection, discovering in the process that a few of them were starting to die. For example, the Atari TT030 seems to have developed a floppy controller fault, and on the Acorn A4000 ARM machine the rechargeable motherboard battery has burst, corroding the tracks; then the power supply died after I fixed that. These are really annoying failures as they’re the rarest of the machines.

The ethernetted Sinclair ZX Spectrums in the basement of the Museum of the History of Science for the "Geek Out!" event.

Anyway, I’ve had great fun with all this, culminating yesterday in the Museum of the History of Science’s “Geek Out!” event, closing the year with ethernet-networked ZX Spectrums in the basement running games served from a MacBook, followed by a symphony played on BBC Micros in the upper gallery.

The display in the Upper Gallery of the Museum of the History of Science.

I’d been testing and checking a second BBC Micro for the display all week, duplicating floppy disks of games ready for yesterday morning. After arriving before 9am with the kit I took it upstairs and attempted to get it working. Hampered by not having the correct monitor cable, I soldiered on, but found that the floppy drive would no longer read disks. Worse, the other BBC Micro’s wouldn’t either. Even after swapping drives etc. between the machines, nothing worked!

So I asked Scott, the exhibition organiser, to get my Sinclair QL out of the store along with the floppy drive and a copy of “Arcanoid II”. The Sinclair saved the day!

A BBC Micro with a board allowing an SD card to be used as a floppy disk drive and my Sinclair QL play games.

Despite there only being two games machines, plus a BBC Micro sitting there for people to program on, the day went really well. The crowds were really interested and the kids were having a ball. Surprisingly, the BBC Micro set up for programming was as popular as the machines running games. One older teenager, who I think was South American, was fascinated with programming and asked where he might be able to get a BBC Micro. A group of Italian Computer Science students also found the machine highly interesting and wondered at how much could be done with so little code. I think they may be searching for BBC Micro emulators now!

And so, the end of the event came. I didn’t realise how tiring the day had been until I got home. I was shattered and almost fell asleep eating my dinner. Still, it was a good day.

And now, the computers are back in their home in the loft. I’ll probably get the Beebs down one day to try to diagnose the floppy problem but probably not this year. I’ve got other things to do, such as play the sequel to the 1984 smash hit on the BBC Micro, “Elite: Dangerous”!

Docking at a Coriolis station in the original Elite on a BBC Micro and in Elite:Dangerous on a PC.

Right on Commanders!

 

It’s been 30 years since the announcement of an important computing product…

and I’m not talking about the Macintosh.


In the news at the moment there have been a number of stories about the launch, 30 years ago, of the Apple Macintosh, with its flash media event and slick “Big Brother” marketing advert. However, two weeks previously, in a lower-key event a third of the planet away in a London hotel, there was another launch of another computer, one which is rather less well known but was a breakthrough in many ways and a stepping stone to other, greater things.

The inside of the original flyer advertisement distributed inside magazines in January 1984.

Now, the Sinclair QL was in many ways a flawed design, mostly due to some really silly design decisions, such as using an underpowered Intel co-processor to handle keyboard input, sound output and serial port input. This poor chip could just about do one of those jobs at a time but not two, which meant that if you played a sound you couldn’t read the keyboard properly, and you certainly couldn’t accept any data on either of the serial ports.

However, this ignores what it did provide. It was the first “affordable” 16-bit computer system with a fully pre-emptive multitasking, modular operating system. It may have been aimed at the business user, a boat which had long since sailed by that point, but it found a niche in the programming community.

A Mac Plus I rescued.

Now, you may be screaming by this point, “But the Macintosh was far more influential and it’s still here!” Well, I’d agree that the Mac did bring a huge leap forward in usability and design, for a price. However, the machines sold as Macs now have very little to do with that cream box with a handle and a screen launched all that time ago. MacOS now is not a descendant of the original Mac operating system at all; it’s a direct descendant of NeXTstep, the BSD UNIX-derived OS running on the NeXT Cube from Steve Jobs’ other company. Even PostScript and the other innovations didn’t come along with the first release of MacOS; they happened later, when the LaserWriter was created. It should also be remembered that the Macintosh was not the first computer marketed with a WIMP environment and a mouse: there was the Apple Lisa before it. In many ways the Macintosh was the Lisa-lite, and most of the launch applications were quick ports of the Lisa applications.

On the matter of cost, when the Macintosh was released it may have been affordable to a few in the USA, but it was well beyond anything the normal person in the UK could afford. If you could find an Apple dealer, the price of a Macintosh started at around £1300 and went up steeply if you wanted to actually do anything. This is why they were so rare. The first one I saw was in 1988, in the Pi Magazine office at University College London.

On the other hand, the Sinclair QL was launched at £399. This was still a huge outlay, being half to two thirds of the monthly wage of a normal person, but at least you could use your TV as a display, it came with applications, and, if you had a printer, it could usually be hooked up cheaply. Sure, you could spend a whole lot more on a monitor and a printer, but you didn’t need to.

Now, in many ways the QL was a bit of a flop, especially if you look at it relative to the ZX Spectrum. However, those machines which did go out there had a remarkably high impact in the longer term, even if it wasn’t at all visible.

Without the QL, would there have been Linux? Linus Torvalds cut his teeth on one, got a little frustrated by its restrictions and decided to write his own operating system. QDOS, for all its advanced features and modularity, is no UNIX, and it was written in a hurry; I can see how Linus may have seen its problems. But it was, in the end, his muse and impetus.

The QL was also the development platform for the first version of AmigaDOS. Without the QL, the Commodore Amiga would have been a very different machine to use. Metacomco in Bristol were contracted to build an operating system for the fledgling machine, and because they had been early to support the QL, building compilers for it, they had the M68000 expertise and the tools for the job. AmigaDOS itself was a re-implementation of Tripos, an operating system developed in Cambridge, UK, and written in BCPL. Seeing as Metacomco already had their own BCPL compiler for the QL, it was a perfect match. Later versions of the OS were re-written (mostly because Metacomco weren’t actually THAT good coders, all told).

Anyway, I’d like to wish a very happy birthday to both the Sinclair QL and the Apple Macintosh. You both advanced computing in your own little ways. Both still have active communities (though the QL’s is “somewhat” smaller). May your legacy go on another 30 years.

The Sinclair QL I'm currently starting to prepare for an exhibition.

I’m currently preparing one of my two QLs (the one I rescued from being recycled by the Oxford University Physical Chemistry department) as a hands-on display in an exhibition at the Oxford Museum of the History of Science which will be happening later this year.

By the way, if you have a QL in need of a few spare parts, take a look at “Sell My Retro”.

systemd is pants!

OK, let’s get this out in the open… systemd is *PANTS*

Phew! That’s better.

I’ve just spent most of the day trying to get a Linux system to reliably mount a disk attached via iSCSI over ethernet at boot time and, more importantly, to reliably get it to be unmounted at shutdown before the network rug is pulled from beneath it.

Now, in the days of the init scripts it was pretty easy to stuff a script in-between the networking coming up and nfs-server starting. It was also dead easy to make sure that your script ran at shutdown before the iSCSI and networking were closed down as well.

Now, with systemd the first part is more tricky, as the program tries to do everything at once. If you want a native systemd service which starts before nfs-server then you have to modify that service description too. You might as well just have an init script which runs last, shutting down nfs-server before mounting the iSCSI disk and then starting it again when it finishes.
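For what it’s worth, the native approach looks something like the following mount unit. This is a hypothetical sketch, not my actual config: the target path, mount point and filesystem type are all invented for illustration.

```ini
# iscsi-data.mount — hypothetical example; adjust What/Where/Type to suit.
[Unit]
Description=iSCSI-backed data disk
# Only start once the network is genuinely up and iscsid is running,
# and make sure we are mounted before nfs-server starts.
Requires=network-online.target
After=network-online.target iscsid.service
Before=nfs-server.service

[Mount]
What=/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2000-01.com.example:data-lun-0
Where=/srv/data
Type=ext4
# _netdev marks this as a network mount, so it should be unmounted
# before the network is torn down at shutdown.
Options=_netdev

[Install]
WantedBy=multi-user.target
```

Since systemd stops units in the inverse of their start order, the Before=nfs-server.service line also means the mount is, in theory, only released after nfs-server has stopped.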

Now, it gets worse when the system is shutting down. Oh yes!

You see, systemd always tries to do things in a hurry. It seems that the design philosophy was better to do things quickly rather than correctly, and this is especially true at shutdown.

In a discussion thread on the systemd-devel mailing list titled “[systemd-devel] systemd and LSB Required/Should-Stop” it’s stated by Lennart Poettering:

On Fri, 24.06.11 14:04, Ville Skyttä (ville.skytta at iki.fi) wrote:

> Hello,
> 
> Am I right that systemd does currently have no support for the LSB
> Required-Stop and Should-Stop keywords?  If yes, are there plans to
> support them in the future?

That is true, we do not support this right now. In systemd the shutdown
order is always the inverse startup order of two services. I am not
entirely opposed to make this more flexible but so far I have not heard
of a good usecase where the flexibility to detach the shutdown oder from
the startup order was necessary.

Now, what this means is that any program or script called during the shutdown process is in a race to complete before the services it depends upon disappear. There is no ability to pause the shutdown to wait for vital events to happen, such as the synchronising and unmounting of disks. This is completely broken, by design.

Emulating a Sun.

A week ago, after completing the installation of a Raspberry Pi into an old Sun CDROM drive external enclosure, I posted a picture of the enclosure on Facebook. The response from an old friend was, “Can you run SunView on it?”


Of course, a Raspberry Pi is no Sun Workstation so, the answer was, not directly. However, I immediately did a Google search for Sun emulators and was very surprised to find that someone had actually written one. And so my project for the next week was born.

After downloading the source to TME (The Machine Emulator) and installing all the development libraries, I tried to build the blighter. Unfortunately this was not as simple as you might think, as the configure script had a bug which only showed itself on non-x86 machines. This, and the build system’s requirement for an ancient version of the libltdl module-loading library for libtool, took me nearly three days to work through. Still, I did now have a binary to play with.

It was then time to try it out. Somewhat foolishly, I followed some instructions I found on the Internet. These included how to create a disk image file and then how to configure it from within the “tape”-booted kernel’s format command. Following them caused the emulator to Bus Error and crash when it tried to write to the virtual disk for the first time. I wasted a day trying to debug this… but it turned out that the instructions were wrong!

Having gone back to basics and used the size and parameters from a Seagate ST1480N disk, used in the SPARCstation 2, I was able to format, partition and install the miniroot. I thought this was the end of my problems… until I tried to install the OS.

The “suninstall” program simply refused to seek to the correct tape file for the installation files, even though mt(1) worked perfectly. Puzzling. After another few hours trying to find why this wasn’t working, I found that the VM config file needed parameters removing from the tape drive definition, as the defaults only worked with NetBSD as the guest OS, not SunOS. :-/

Everything seemed good. After a couple of hours I had a working VM on the Pi, and it didn’t even seem that slow. So I logged in and started SunView. Woohoo! Task complete… almost. The mouse emulation didn’t work properly. I thought that maybe the VM was running too slowly to read the input correctly. Still, I could at least post “task complete”, as I did have SunView running, which is what Richard had asked for.


It then took me another day or so of debugging to determine that the Mouse Systems mouse emulation was broken when encoding the movement delta values. The original code seems to require the compiler to treat hex values as signed integers when assigning them to unsigned 8-bit integers. Well, dodgy!
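The encoding at issue boils down to two’s-complement truncation of a signed delta into a single unsigned byte. Here’s a minimal Python sketch of the idea (my own illustration with invented function names, not the TME source):

```python
# Mouse Systems-style protocols send movement deltas as single bytes.
# A signed delta must be truncated to its two's-complement byte value
# before being stored in an unsigned 8-bit field.

def encode_delta(delta: int) -> int:
    """Encode a signed movement delta (-128..127) as an unsigned byte."""
    if not -128 <= delta <= 127:
        raise ValueError("delta out of range for one byte")
    return delta & 0xFF  # two's-complement truncation

def decode_delta(byte: int) -> int:
    """Recover the signed delta from an unsigned byte."""
    return byte - 256 if byte > 127 else byte

print(encode_delta(-5))                 # 251 (0xFB)
print(decode_delta(encode_delta(-5)))   # -5
```

In C the same truncation happens implicitly when assigning a signed value to a uint8_t, which is exactly the sort of place where a compiler’s interpretation of a hex constant’s signedness can bite.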

Having fixed that bug all now works, if rather slowly.

Given the interest on Google+ and Twitter about this emulation I’ve spent today creating an automatic download, configure and build script with an easy to use prototype VM creation system for an emulated Sun3 running SunOS 4.1.1, which you can download here.

On the Origin of Technological Civilisation.

This morning a friend posted an image of a supernova on Facebook and wondered just how many civilisations died as a result. Now, if you take the standard Drake Equation and use it as the basis of your estimate of technological life, and hence civilisation, then you might well conclude that at least one did, given the massive gamma-ray burst associated with such an event. However, I don’t believe this at all, and here’s why:

The parameters usually plugged into the Drake Equation assume that the development of technological lifeforms is almost inevitable once you get life going, as is the prior appearance of multi-cellular and complex lifeforms. Given my training as an Earth Scientist, and hence some knowledge of how life developed on the Earth, the only place we know life exists, I very much dispute these assumptions.

Life is common, complex life probably not so much

If we look at the Earth as an example of how life may develop on a planet, we find from the evidence that simple, single-celled life appeared pretty darned quickly after the end of the Late Heavy Bombardment, during which it would have been practically impossible for anything to survive. So we can assume that this kind of life is likely to spring up almost anywhere in the Universe given similar starting conditions. Life in the Universe, then, is common.

However, after this great “leap”, life got lazy. It didn’t really change a great deal for over 2.5 billion years. OK, it had to cope with the rise in oxygen and switch power sources, but otherwise it didn’t do much other than perhaps become symbiotic and file its DNA away into a special container. Basically, there was no massive evolutionary advantage to change, so it didn’t.

From the fossil record it currently looks as though a global climatic event effectively pushed life to co-operate so as to survive in challenging environments. Without this push, life on Earth would probably still be single celled.

So, just about a billion years ago, we got multi-cellular life… Woo-hoo! It took a while before this became complex, though; it seems that only when some of these organisms found eating other life to be a convenient method of energy collection did the arms race begin and complex life appear.

It was all a big accident!

Climbing the ladder to technology? Maybe not.

So, from this slow start it took only about 500 million years to get to creatures which could potentially have had enough brain power to be intelligent enough to wield tools. So why didn’t we see technological dinosaurs?

Well, technological intelligence requires a couple of things: firstly, the abstract, innate intelligence and flexible world-modelling capability to visualise a tool and make the imaginative leap to think it up in the first place; secondly, the twist of evolutionary fate which gives the organism the body parts required to fashion and use technology.

It’s becoming more and more apparent that many creatures from many strands of life are capable of the first part: not only great apes, primates or even just mammals, but birds (i.e. dinosaurs) and even molluscs (the cephalopods, such as the octopus). However, most of these intelligent beings are handicapped in ways that make technological advancement impractical. They either don’t have the right body parts, don’t have the time, live in the wrong sort of environment, or aren’t social.

Also, in many ways, the pre-requisites for being technological aren’t usually the best for long-term survival in an evolutionary sense. Generalists generally find it hard to compete against specialists, unless there are specific environmental drivers which cause the specialists to fail. Humans almost didn’t make it.

So, humans are an accident?

Basically, yes. We are an aberration. We only made it by sheer fluke. Given the odds, we shouldn’t be here at all, and the planet Earth would be no different than it has been since the last great extinction.

So, what does this mean for the Drake Equation?

We have to remember that we’re looking down the wrong end of the telescope at this problem, and hence get a very skewed idea. We are here to observe, and it seems that this must be proof of the inevitability of our appearing. The original parameters of the Drake Equation reflect this and are, in my opinion given the evidence, several tens of orders of magnitude too optimistic.

Even given the hugely, mind-bogglingly big number of potentially life-harbouring planets out there, it’s very probable that only a really, really tiny percentage managed to get beyond single-celled organisms. Even then, the combination of factors which would allow a society to develop technology and become a civilisation is so remote, and so actively discouraged by evolution, that technological civilisations must be vanishingly rare.
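How sensitive the Drake Equation is to these assumptions is easy to demonstrate. Here’s a rough Python sketch; every parameter value below is my own illustrative guess, not a figure from any survey:

```python
# Drake equation: N = R* . fp . ne . fl . fi . fc . L
# (rate of star formation, fraction with planets, habitable planets per
# system, fraction developing life, fraction developing intelligence,
# fraction becoming communicative, and civilisation lifetime in years).

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate the number of communicating civilisations in a galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# "Optimistic" classic-style inputs: intelligence near-inevitable once
# life gets going.
optimistic = drake(10, 0.5, 2, 1.0, 0.5, 0.5, 1_000_000)

# "Pessimistic" inputs reflecting the argument above: multi-cellular
# life, and then technology, are flukes, so f_i is made tiny.
pessimistic = drake(10, 0.5, 2, 1.0, 1e-9, 0.5, 1_000_000)

print(optimistic)   # 2500000.0 — a galaxy full of civilisations
print(pessimistic)  # 0.005 — effectively nobody
```

A change in just one "inevitability" factor swings the answer from millions of civilisations to essentially none, which is the whole point of the argument.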

Is there anyone else out there then?

Probably not. Sorry.

Given the odds, it’s quite possible that we are the very first technological beings to exist within the Universe, seeing as the Sol system was possibly one of the first to appear after enough building blocks had been created by previous generations of stars. Even if we are not, then given the number of star systems out there, the time scales involved, and the probable lifetime of any species being only a couple of million years at best, we’ve probably missed the previous ones, and others will appear after we’re long gone.

We are but a fleeting island in entropy’s march.