Retro-review: British “business” computers of 1985. The ACT Apricot F1 vs. Sinclair QL.

In 1984 Sir Clive Sinclair announced the QL as his answer to a “business computer”, costing £399. A year later ACT released their follow-up to the Apricot PC, the F1, in a stylish new slim design with such innovations as a wireless (infra-red) keyboard and mouse (actually a trackball), costing around £3000.

Seeing as I’ve been rebuilding an Apricot F1 and I have a couple of QLs I thought I’d give a comparison review of the two machines side by side.

Well, back in 1984 when the Sinclair QL came out, to put it mildly, everyone poured scorn on the machine as a “business” computer. After all, where were the floppy drives? Did it run Lotus 1-2-3? Of course, the price played very little part in this discussion, as all the “business” machines were many multiples of the cost of a base QL. Having played with its direct(?) competition I think I can make a better comparison. (I’ll cut to the chase: in terms of speed and convenience the QL wins, but with some pretty important gotchas.)

1. The Sinclair QL

The QL comes out of the box with 128K of RAM, two 100K Microdrives and a set of pretty decent office applications from Psion, including a word processor and a spreadsheet. A monitor costs extra, adding a couple of hundred pounds to the price, then add a printer for another £100. So, for a working “business” system you’re probably looking at £700-ish in 1985.

In terms of convenience, you plug it all in, switch it on and you have a working system. You probably want to buy a set of Microdrive cartridges which I seem to remember retailed at about £5 each and then make working copies of the Psion Exchange suite.

Popping an application cart into the first drive and a file cart into the second drive, resetting and hitting F2 to boot would bring you to the Quill word processor in about a minute.

In terms of speed, the text update is slow relative to IBM PC-XT clones of the day due to the bitmapped graphics, but those machines were well out of the QL’s price range, costing many thousands of pounds. However, for a word processor the screen update was adequate and Quill was definitely up there with the early versions of WordPerfect etc. in terms of functionality.

With respect to the much maligned Microdrives, they were slow to “seek” to find a file but could be surprisingly fast to load one if the data was contiguous on the tape as the transfer rate is technically higher than that of standard double-density floppy drives.

The biggest downside of the QL, however, is that it doesn’t have a separate keyboard. With everything being in the same unit the ergonomics are poor, as it’s difficult to move the unit to a comfortable position. Not that ergonomics was really a thing back then. The keyboard itself wasn’t brilliant to type on either. It’s an early iteration of the plunger-and-membrane keyboards most systems use today, but it’s just clunky. The lack of a dedicated delete key is another major omission.

Expansion of the machine was possible, with both memory upgrades and floppy disk interfaces available by 1985. However, they added to the cost and to the number of cables attached to the keyboard, making the ergonomics even worse.

2. The ACT Apricot F1

The F1 was a follow-up to the Apricot PC, which was a boxy luggable desktop machine. The new F1 had a clean, stylish new design and, unlike its sibling, a single 3.5″ double-density floppy disk drive built in. It also had an innovative (but ultimately problematic) infra-red wireless keyboard. As with its sibling, the time-of-day clock was held in the keyboard and not in the main unit, so to set it you had to hit a key after each reboot.

The F1’s keyboard keys actually looked very similar to the Sinclair QL’s, being flat squares with a raised circle. Apparently this keyboard didn’t have a great feel to it either, and it was replaced with a version with more normal key caps and a better mechanism on the later F2 and F10 models.

The Apricot machines were based upon an Intel 8086 processor clocked at the same speed as the 8088 in the IBM PC-XT. You would therefore expect them to run more quickly, the data and address buses not being multiplexed 8-bit ones, but this is not the case: memory accesses were interleaved with the video circuitry, slowing the whole system down to the same speed.

Although the Apricots did run MSDOS (2.11) they were not at all IBM PC-XT compatible. Any software which didn’t restrict itself to standard MSDOS calls wouldn’t work, and the screen display access was completely different, so applications had to be specially ported to the platform. Later ACT did create a compatibility shim application which added at least MDA/CGA display access via software emulation, but it was very slow.

To do any graphics you had to load the Digital Research GSX graphics subsystem and make calls to it, at least if you wanted your application to run on all the Apricot machines, as the graphics hardware on each was very different.

So, what did you get for your £3000 F1 system? Basically, you got the machine, a wireless keyboard and MSDOS 2.11 with the “Activity” graphical user interface. The machine had a composite video out port as standard, along with the proprietary 9 pin RGBI port. If you wanted a monitor then it was a couple of hundred pounds extra for the 9″ monochrome display and a lot more for the 12″ colour one. Applications were extra too, at a couple of hundred pounds each. There was a native port of Wordstar for your word processing needs and a spreadsheet, which I’ve not found as a floppy image on the ‘Net.

So, what was the user experience like?

Well, when you power the unit on (with a power switch, something the QL lacks!) it comes to the “BIOS” screen, which is quite nice. You then have to hit the “TIME/DATE” key on the keyboard and put the OS floppy into the machine. The machine will then (slowly) boot MSDOS and bring you into the “Activity” GUI, which you have to navigate using the number keypad (you didn’t pay that extra £100 or so for the “mouse” trackball, which isn’t used by anything, did you?). Ejecting the OS floppy and then putting the application floppy into the machine will allow you to start the word processor. Yes, there’s a lot of disk juggling going on due to the single drive.

So, after about 5 minutes you will probably be into your application.

The screen display update speed is very similar to that of the QL as there’s no hardware acceleration other than scrolling. The graphics capability of the machine is actually almost identical to that of the QL.

With the base system only coming with 256K of RAM, and the OS being in RAM as well, the working memory is actually less than that of the base 128K QL.

As for expandability, there is a slot internally for upgrades and another on the side. There was a 10MB “Winchester disk” external unit which would probably cost you almost as much as the machine in 1985. There was no option to add an external floppy drive, though the hardware is capable of it and it would only have meant adding a header on the motherboard; the clip-out panel in the case was there ready. Even with two floppy drives MSDOS is painful to use, though.

3. Conclusion.

Well, to be honest it’s six of one and half a dozen of the other. In terms of speed, the 7.5MHz 68008 in the QL, hampered by its multiplexed 8-bit bus, performs about the same as the Apricot’s 4.77MHz 8086 in use. The single floppy drive is a major problem when using the machine, and it “feels” slower than the Microdrives not because of the data rate (the 600rpm drives make the data rate far higher) or the latency, which is far less, but because all accesses are in the foreground and you have to swap disks so much more often. It makes even copying a file from one floppy to another a major task, with four disk swaps per file. In this way the F1 is far less usable.

However, the separate keyboard, irrespective of the problems with its wireless nature, is far better than the “all built into the keyboard” form factor of the QL.

Anyway, even if the purchase price weren’t a decider, given my experience of operating the two machines today I find the base QL far superior to use, and I would have preferred it.

The Apricot F Series with more memory, hard disk and the later release of the GEM graphical front end does make the machines a world better, but Apricot (rebranded from ACT in 1986) soon dropped them in favour of PC compatibility, and a price hike, and the innovative British business PC era came to an end.

EDTracker Setting Up and Use with Elite: Dangerous

Over the last weekend I attended the Lavecon event just outside Northampton. This event came out of the fan response to the development and release of Elite: Dangerous from Frontier Developments and started off a couple of years ago as merely a get-together of those who produced the Lave Radio show and a few others. It’s now quite a large two-day convention.

Anyway, displaying (and selling) their wares were the team who have been creating a low-cost head tracking device, called EDTracker, which allows the wearer to change the view in a compatible game according to the direction their head is pointing. This gives a rudimentary “virtual reality” experience without the head mounted display. Seeing as the latest “Pro” unit, which contains a magnetometer as well as accelerometers, was on sale for £40 I thought it worth a try and bought one.

Given the slightly fiddly nature of the set up and the EDTracker web site not having an up-to-date set-up procedure I thought I’d write my own set of set-up instructions from the point of view of a user rather than a developer and put everything in one place.

Setting up the EDTracker Pro for the first time.

Setting up the hardware:

[Image: the EDTracker Pro unit]

The first thing that you need to do once you get the device out of the box is to mount it on some sort of head gear. Most people fit it to the top of some headphones as they’ll be using those with the game anyway. At Lavecon EDTracker were selling rubber bands to do the job, but you may want to use something more permanent.

Fit the unit on the top so that it’s level when you wear the headphones and with the USB port pointing towards your left ear. Contrary to common sense, the box needs to be fitted with the screws uppermost.

Now sit the headphones on something so that the device is level and stationary. I found that hanging them upon my joystick worked well.

Plug the USB cable into the device and secure the cable to the side of the headphones headband so that it doesn’t pull on the box and make it move. Then plug it into your PC.

Windows will now detect the device and attempt to load a driver. Ignore any advice on the EDTracker website about the rest of the set-up as this is NOT for the Pro model.

After a while Windows will tell you that it’s installed the driver. This can take a number of minutes, so be patient.

If you now open the “Devices and Printers” tool in Windows you will see a new gaming device, the EDTrackerPro. The Pro pretends to be a joystick.

Setting up the software.

You will now need to download some software to configure and use the device. Unfortunately the EDTracker website download section is not really helpful here and is hard to navigate but what you need is the EDTrackerPro GUI. (Click on the link to download it.)

To work with Elite: Dangerous without further configuration of the game you’ll also need opentrack 2.2.

Neither of these pieces of software uses a Windows installer, so it’s easiest if you create an EDTracker folder (or some other folder) on your desktop and unzip them within that for easy access, as you’ll need both of them every time you want to use the device.

Once you’ve done this go into the EDTrackerPro_GUI folder and run “EDTrackerPro”.

A window should now open with a green bar over the top of the bald head saying “EDTracker Connected”.

[Figure: EDTrackerPro main window]

Change the settings so that they match those in the above figure. Now click upon the “Magnetometer” tab and the display will change…

[Figure: EDTrackerPro magnetometer display]

You now need to calibrate the magnetometer within the device so that it can use the local Earth’s magnetic field as a reference to prevent the tracking drifting over time. Once you click on “Start Calibration” a video will pop up in a new window showing you what you need to do. Basically it involves rotating the headphones 360 degrees in each orthogonal direction so the local magnetic field can be mapped. Pick up the headphones and do this a few times and then click on the “Save Calibration” button. You should only need to do this once for each location you use the device in. If you always use it in the same place then you only ever need to do it once.

Now click back on the “Head” tab to get back to the original view and hang the headphones back on the joystick, or whatever you have been using. Without touching the headphones click on the “Auto Bias” button. This will allow the system to calibrate the accelerometers so that they don’t drift.

Put the headphones on, look straight ahead and click on the “Reset View” button to set the “centre” position.

Right. That’s the device configured, now for the interface software for the game. You can close the EDTrackerPro program.

Navigate to where you unzipped opentrack and then down to where the program files are lurking. Run the “opentrack” program. This will need to be started and kept running whenever you wish to use the EDTracker Pro within Elite: Dangerous as it provides the software glue.

[Figure: opentrack main window]

To begin with opentrack will not see the EDTrackerPro but may show the name of your webcam. Under “Main tracker” click on the top button and select “Joystick”. Now click on the “Settings” button and you should be able to select “EDTrackerPro” from the list.

Now, under the “Game protocol” section select “FreeTrack 2.0” as in the figure above. So as to damp any small fluctuations due to reading errors and the like select the “Accela Filter Mk4”.

[Figure: opentrack hot key settings]

Click on the “Keys” button to set the hot keys to turn the tracking on and off and to re-centre. I’ve chosen F12 and F11 in the above example. Click “OK” to return to the main window. You can now click on the “Start” button.

We’re not finished yet, as there is no mapping between the EDTracker’s motion and the output of the program. To set this up click on the “Mapping” button.

[Figure: opentrack output mapping options]

First select the “Options” tab. Tick the “Invert” box for each of the three axes.

[Figure: opentrack yaw mapping]

Now click on the “Yaw” tab. Put your mouse pointer in the bottom right corner of the top graph, click and drag upwards. You will see a point and a line move with your cursor. The diagonal line is the mapping between the input value from the EDTracker’s yaw axis and the output of the program. You can create more control points by clicking on this line and then you can drag them to make a curve or increase the sensitivity of the tracking. The curve I’ve created in the figure above gives a dead zone close to the centre, so that small head movements when looking around the cockpit at the front don’t cause any motion, and then a smooth motion after that to a full look to the left or the right by turning my head ~65 degrees.

[Figure: opentrack pitch mapping]

Similarly, you need to do the same for the “Pitch” axis.

[Figure: opentrack roll mapping]

And finally for the “Roll” axis. I found that the roll was giving me problems and so I decreased the sensitivity greatly.

You can now close the mapping window and save the settings. I created a new settings file for Elite: Dangerous, a copy of which you can download here.

You can now start Elite: Dangerous and the game will automatically use the head tracker! No configuration is necessary. It’s best to test it out using the first of the tutorial missions. You can change the mapping, as above, to suit your tastes.

Note: If you have issues with the mappings bouncing around or being 90 degrees out it’s probably due to Windows “calibrating” the “joystick”. Go into the “Devices and Printers” tool, open the properties for the EDTrackerPro and reset to default. This should sort it out and was discovered at Lavecon by one of the EDTracker team when I had issues.

Day-to-Day Use.

Every time before you use the EDTracker Pro you will need to do the following:

  • Run the EDTrackerPro GUI program, hang the headphones up as above and run the “Auto Bias”.
  • Only if you are in a new location will you need to re-calibrate the magnetometer.
  • Run the opentrack program and click on the “Start” button.

Once you’ve done this you’re ready to run Elite: Dangerous and play!

[Edit: 2015/11/15 12:10 GMT]

I’ve now uploaded a new version of my OpenTrack configuration file without any real dead-zones. I’ve been using this for a few months now and it works well and doesn’t have a real dead-zone in the centre and also doesn’t give me any motion sickness effects.

[Edit: 2016/01/24 09:45 GMT]

Changed link to software to point to the new Pro website, http://www.edtracker.co.uk/

[Edit: 2017/02/18 15:40 GMT]

Updated link to EDTrackerPro GUI installer to point to the latest MSI installer version.

2014 – The year of retro-computing.

For me, other than health issues which I won’t say any more about here, 2014 has been a year dominated by retro-computing, sparked off by the call from the Museum of the History of Science in Oxford for exhibits for an exhibition they were setting up.

The call went out in January, a little before my previous post (this is called “irregular” for a reason), and I responded by offering the use of my archive of machines. In the end they took up the offer of a Sinclair QL and a BBC Micro.

Of course, this meant that I had to get the machines down from the loft to check them out, clean them and generally prepare them for the exhibition, which was going to start in May.

The machines themselves proved to be in fine fettle but the floppy disk drives needed a bit of coaxing. The worst problem was that the lubricant used in the 1980s on the drives had mostly dried out to a sticky goo, the opposite of a lubricant. This took a little while to sort out. Another issue was getting a way to display the computers’ output.

My Sinclair QL showing the appalling echoing and smearing on the TV

The LCD TV I used in my spare room as a monitor for my server does have an analogue TV input, along with HDMI, SCART, VGA etc. so it shouldn’t be a major problem. However, it’s obvious that the company Dixons/Curry’s chose to build their Sandstrom TVs never actually tested the analogue circuitry very well. Displaying anything with a defined, high-contrast signal produces at best multiple echoes of the point/line across the screen or, at worst, a smeared bright line to the left of the pixels. Not exactly useful for anything other than a quick test.

This problem induced me to build new SCART video cables for the machines. A fiddly job at the best of times, but not helped by the TV’s insistence on only displaying composite video through the SCART connector unless one pin is energised with 3 volts, with no way of overriding this in the menu system. Quite a pain when the computers to be connected don’t produce this voltage (and it’s not available from the TV’s SCART either). So, a battery box had to be installed and velcroed on. What a pain.

Anyway, by the time Easter came I had everything ready and delivery to the museum was planned for after I got back from Cornwall. Unfortunately my health intervened and I was stuck down South for rather longer than expected, missing the deadline for delivery. The museum managed to find a BBC Micro from someone else, along with my friend Janet’s Beeb, so they could set up without my stuff initially.

Thankfully I got back to Oxford just before the exhibition was to begin, so the team from the museum rocked up in a van and collected my QL, screen and floppy and they were deployed just in time for the opening.

Unfortunately I was still too ill to go into town so a friend took a picture of the set-up for me. I was so glad everything had turned out OK.

The Sinclair QL and BBC Micro doing public duty in the “Geek is Good” exhibition.
(Sorry for the camera shake.)

A few weeks later, after I’d become well enough, I travelled into town myself and took a look at the exhibition, “Geek is Good”, and realised that the materials in the hands-on display area were rather basic and used an arcane BASIC programming example which was really dull and, well, very 1970s.

This spurred me into action. Being too unwell to go into work did allow me some time to use the energy I had to create a set of three programs for the BBC and the QL, exact equivalents in both BASIC dialects, and a crib sheet for program entry, explaining how to edit lines and how to simplify typing them in using the “AUTO” command. Far better.

All this action at the museum also inspired me to “play” with the other machines in my collection, discovering in the process that a few of them are starting to die. For example, the Atari TT030 seems to have developed a floppy controller fault, and on the Acorn A4000 ARM machine the rechargeable motherboard battery has burst, corroding the tracks, and then the power supply died after I fixed this. These are really annoying failures as they’re the rarest of the machines.

The ethernetted Sinclair ZX Spectrums in the basement of the Museum of the History of Science for the “Geek Out!” event.

Anyway, I’ve had great fun with all this, culminating yesterday with the Museum of the History of Science’s “Geek Out!” event, closing the year with ethernet-networked ZX Spectrums in the basement running games served from a MacBook, followed by them playing a symphony, and BBC Micros in the upper gallery.

The display in the Upper Gallery of the Museum of the History of Science.

I’d been testing and checking a second BBC Micro for the display all week, duplicating floppy disks of games all ready for yesterday morning. After arriving before 9am with the kit I took it upstairs and attempted to get it working. Hampered by not having the correct monitor cable I soldiered ahead, but found that the floppy drive would no longer read disks. Worse, the other BBC Micro wouldn’t do so either. Even after swapping drives etc. between the machines nothing worked!

To save the day I asked Scott, the exhibition organiser, to get my Sinclair QL out of the store along with the floppy drive and a copy of “Arcanoid II”. The Sinclair saved the day!

A BBC Micro, with a board allowing an SD card to be used as a floppy disk drive, and my Sinclair QL playing games.

Despite only having two games machines plus a BBC Micro sitting there for people to program upon, the day went really well. The crowds were really interested and the kids were having a ball. Surprisingly, the BBC Micro set up for programming was as popular as the machines running games. One late teenager, who I think was South American, was fascinated with programming and asked where he might be able to get a BBC Micro. Also, a group of Italian Computer Science students found the machine highly interesting and wondered at how much could be done with so little coding. I think they may be searching for BBC Micro emulators now!

And so, the end of the event came. I didn’t realise how tiring the day had been until I got home. I was shattered and almost fell asleep eating my dinner. Still, it was a good day.

And now, the computers are back in their home in the loft. I’ll probably get the Beebs down one day to try to diagnose the floppy problem, but probably not this year. I’ve got other things to do, such as playing the sequel to the 1984 smash hit on the BBC Micro: “Elite: Dangerous”!

Docking at a Coriolis station in the original Elite on a BBC Micro and in Elite:Dangerous on a PC.

Right on Commanders!

 

It’s been 30 years since the announcement of an important computing product…

and I’m not talking about the Macintosh.

[Images: the QL launch and the Macintosh launch]

In the news at the moment there have been a number of stories about the launch 30 years ago of the Apple Macintosh, with its flash media event and slick “Big Brother” marketing advert. However, two weeks previously, in a lower-key event a third of the planet away in a London hotel, there was another launch of another computer, one which is rather less well known but was a breakthrough in many ways and the stepping stone to other, greater things.

The inside of the original flyer advertisement distributed inside magazines in January 1984.

[Image: double-page Sinclair QL advertisement from Personal Computer News, 4th February 1984]

Now, the Sinclair QL was in many ways a flawed design, mostly due to some really silly design decisions, such as using the little Intel 8049 co-processor to do keyboard input, sound output and serial port input. This poor, underpowered chip could just about do one of those jobs at a time but not two, which meant that if you played a sound you couldn’t read the keyboard properly and you certainly couldn’t accept any data on either of the serial ports.

However, this ignores what it did provide. It was the first “affordable” 16 bit computer system with a fully pre-emptive multitasking, modular operating system. It may have been aimed at the business consumer, a boat long sailed by this point, but it found a niche in the programming community.

A Mac Plus I rescued.

Now, you may be screaming by this point, “But the Macintosh was far more influential and it’s still here!” Well, I’d agree that the Mac did bring a huge leap forward in usability and design, for a price. However, the machines which are sold as Macs now have very little to do with that cream box with a handle and a screen launched all that time ago. MacOS now is not a descendant of that original Mac operating system at all; it’s a direct descendant of NeXTstep, the BSD UNIX derived OS running on the NeXT Cube from Steve Jobs’ other company. Even PostScript and the other innovations didn’t come along with the first release of MacOS; they happened later when the LaserWriter was created. It should also be remembered that the Macintosh was not the first computer marketed with a WIMP environment and a mouse: there was the Apple Lisa before it. In many ways the Macintosh was the Lisa-Lite, and most of the launch applications were quick ports of the Lisa applications.

On the matter of cost, when the Macintosh was released it may have been affordable to a few in the USA but it was well beyond anything the normal person in the UK could afford. If you could find an Apple dealer the price of a Macintosh started at around £1300 and went up steeply if you wanted to actually do anything. This is why they were so rare. The first one I saw was in 1988 in the Pi Magazine office, University College London.

On the other hand, the Sinclair QL was launched at £399. This was still a huge outlay, being half to two thirds the monthly wage of a normal person, but at least you could use your TV as a display, it came with applications and, if you had a printer, it could usually be hooked up cheaply. Sure, you could spend a whole lot more on a monitor and a printer, but you didn’t need to.

Now, in many ways the QL was a bit of a flop, especially if you look at it relative to the ZX Spectrum. However, those machines which did go out there had a remarkably high impact in the longer term, even if it wasn’t at all visible.

Without the QL would there have been Linux? Linus Torvalds cut his teeth on one, got a little frustrated by its restrictions and decided to write his own operating system. QDOS, for all its advanced features and modularity, is no UNIX, and it was written in a hurry; I can see how Linus may have seen the problems, but it was, in the end, his muse and impetus.

The QL was also the development platform for the first version of AmigaDOS. Without the QL the Commodore Amiga would have been a very different machine to use. Metacomco in Bristol was contracted to build an operating system for the fledgling machine and, because they had been early to support the QL, building compilers for it, they had the M68000 expertise and the tools for the job. AmigaDOS itself was a re-implementation of Tripos, an operating system developed in Cambridge, UK and written in BCPL. Seeing as Metacomco already had their own BCPL compiler for the QL it was a perfect match. Later versions of the OS were re-written (mostly because Metacomco weren’t actually all that good at coding, all told).

Anyway, I’d like to wish a very happy birthday to both the Sinclair QL and the Apple Macintosh. You both advanced computing in your own little ways. Both still have active communities (though the QL’s is “somewhat” smaller). May your legacy go on another 30 years.

The Sinclair QL I’m currently starting to prepare for an exhibition.

I’m currently preparing one of my two QLs (the one I rescued from being recycled by the Oxford University Physical Chemistry department) as a hands-on display in an exhibition at the Oxford Museum of the History of Science which will be happening later this year.

By the way, if you have a QL in need of a few spare parts, take a look at “Sell My Retro”.

systemd is pants!

OK, let’s get this out in the open… systemd is *PANTS*

Phew! That’s better.

I’ve just spent most of the day trying to get a Linux system to reliably mount a disk attached via iSCSI over ethernet at boot time and, more importantly, to reliably get it to be unmounted at shutdown before the network rug is pulled from beneath it.

Now, in the days of the init scripts it was pretty easy to stuff a script in-between the networking coming up and nfs-server starting. It was also dead easy to make sure that your script ran at shutdown before the iSCSI and networking were closed down as well.

Now, with systemd the first part is trickier, as the program tries to do everything at once. If you want a native systemd service which starts before nfs-server then you have to modify that service description too. You might as well just have an init script which runs last, shutting down nfs-server before mounting the iSCSI disk and then starting it again when it finishes.
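
For anyone who hasn’t met the format, this is roughly the shape of the native unit involved. It’s a minimal sketch with made-up names and paths rather than my actual unit, and it assumes a matching noauto,_netdev entry in /etc/fstab for the mount point:

# /etc/systemd/system/iscsi-data.service  (illustrative name and paths)
[Unit]
Description=Mount the iSCSI-backed data disk
# Start after the network and the iSCSI initiator and before nfs-server,
# so that (in theory) it gets stopped, and unmounted, in the reverse order.
Wants=network-online.target
After=network-online.target iscsid.service
Before=nfs-server.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mount /srv/iscsi-data
ExecStop=/bin/umount /srv/iscsi-data

[Install]
WantedBy=multi-user.target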

Now, it gets worse when the system is shutting down. Oh yes!

You see, systemd always tries to do things in a hurry. It seems that the design philosophy was better to do things quickly rather than correctly, and this is especially true at shutdown.

In a discussion thread on the systemd-devel mailing list titled “[systemd-devel] systemd and LSB Required/Should-Stop” it’s stated by Lennart Poettering:

On Fri, 24.06.11 14:04, Ville Skyttä (ville.skytta at iki.fi) wrote:

> Hello,
> 
> Am I right that systemd does currently have no support for the LSB
> Required-Stop and Should-Stop keywords?  If yes, are there plans to
> support them in the future?

That is true, we do not support this right now. In systemd the shutdown
order is always the inverse startup order of two services. I am not
entirely opposed to make this more flexible but so far I have not heard
of a good usecase where the flexibility to detach the shutdown oder from
the startup order was necessary.

Now, what this means is that any program or script called during the shutdown process is in a race to complete before the services it depends upon disappear. There is no ability to pause the shutdown to wait for vital events to happen, such as the synchronising and unmounting of disks. This is completely broken, by design.

Emulating a Sun.

A week ago, after completing the installation of a Raspberry Pi into an old Sun CDROM drive external enclosure, I posted a picture of the enclosure on Facebook. The response from an old friend was, “Can you run SunView on it?”

Of course, a Raspberry Pi is no Sun Workstation so, the answer was, not directly. However, I immediately did a Google search for Sun emulators and was very surprised to find that someone had actually written one. And so my project for the next week was born.

After downloading the source to TME (The Machine Emulator) and installing all the development libraries I tried to build the blighter. Unfortunately this was not as simple as you might have thought, as the configure script had a bug which only showed itself if you weren’t on an x86 machine. This, and the build system’s requirement for an ancient version of the libltdl module loading library for libtool, took me nearly three days to work through. Still, I did now have a binary to play with.

It was then time to try it out. Somewhat foolishly I followed some instructions I found on the Internet. These instructions included how to create a disk image file and then how to configure it from within the “tape” booted kernel’s format command. Following these instructions caused the emulator to Bus Error and crash when it tried to write to the virtual disk for the first time. I wasted a day trying to debug this… but it turned out that the instructions were wrong!

Having gone back to basics and used the size and parameters from a Seagate ST1480N disk, used in the SPARCstation 2, I was able to format, partition and install the miniroot. I thought this was the end of my problems… until I tried to install the OS.

The “suninstall” program just refused to seek to the correct tape file for the installation files, even though mt(1) worked perfectly. Puzzling. Another few hours trying to find why this wasn’t working and I found that the VM config file needed some parameters removing from the tape drive definition, as the defaults only worked with NetBSD as the guest OS and not SunOS. :-/

Everything seemed good. After a couple of hours I had a working VM on the Pi and it didn’t even seem that slow. So, I logged in and started SunView. Woohoo! Task complete, almost. The mouse emulation didn’t work properly. I thought that maybe it was due to the VM running too slowly to read the input correctly. Still, I could at least post “task complete” as I did have SunView running, which is what Richard had asked for.

It then took me another day or so of debugging to determine that the Mouse Systems mouse emulation was broken when encoding the movement delta values. The original code seems to require the compiler to treat hex constants as signed integers when assigning them to unsigned 8-bit integers. Well, dodgy!
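
To make the signedness issue concrete, the encoding has to work roughly like this. This is my own illustration of the protocol’s two’s-complement deltas, not the TME source:

#include <stdint.h>

/* Mouse Systems packets carry the movement deltas as signed 8-bit
 * two's-complement values, so a movement of -5 must go on the wire as 0xFB. */
static uint8_t encode_delta(int delta)
{
    if (delta > 127)   delta = 127;    /* clamp to the signed 8-bit range */
    if (delta < -128)  delta = -128;
    return (uint8_t)(int8_t)delta;     /* -5 becomes 0xFB */
}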

Having fixed that bug all now works, if rather slowly.

Given the interest on Google+ and Twitter about this emulation I’ve spent today creating an automatic download, configure and build script with an easy to use prototype VM creation system for an emulated Sun3 running SunOS 4.1.1, which you can download here.

OpenIndiana: How could the developers go so wrong?

Well, today I’ve been playing with OpenIndiana, the OpenSolaris derivative created after Oracle killed off its ancestor.

Well, to say that I was rather disappointed would be an understatement. It’s rather obvious that the developers of the distribution are not system administrators of integrated networked environments, otherwise they would not have made such stupid design decisions.

Anyway, here’s the story of my day:

I downloaded the live DVD desktop version initially as I assumed that this would, when installed, effectively replicate a Solaris desktop environment. Seeing as Solaris in this configuration is capable of being a fully functional server as well, I assumed that this would be the case for OpenIndiana.

So, I created a virtual machine under VirtualBox on the Mac, booted the DVD image and started the install. I was surprised at how little interaction there was during the install process, as all it asked about was how to partition the disk and to create a root password and a new user. After the install things went downhill.

Now, it seems that the OpenIndiana bods are trying to ape Linux. When you boot up you get a GDM login screen, but you can’t log in as root. So, you log in as the user you created; not too much of a problem, except that you now can’t start any of the configuration applications, which fail silently after you type the root password. You can’t sudo commands either, as it says that you don’t have permission…

Finally, I managed to get past this roadblock by trying ‘su –‘ which then asked me to change the root password! Once this was done I could actually run the configuration utilities. Not that it got me very much further, as there seems to be no way to set a static IP address out of the box.

I decided to trash that version and download the server version DVD. Maybe that would be better? Surely it would, it’s designed to be a server…

I booted the DVD image and the text installer started, very similar to the old Solaris installer to begin with, except all it asked about again was the disk partitioning, root password/user creation and networking, giving only the options for no networking or automatic network configuration. There was no manual network configuration! What?!!!! This is a server install!

Also missing from the installer was any way of setting up network authentication services or modifying what was installed. The installer had been lobotomised.

Once the OS had installed and booted up there were some more nasty surprises. Again, you couldn’t set a static IP address and any changes to the networking were silently reverted. It was only with some Googling that I managed to hunt down the culprit, the network/physical:nwam service, which is the default configuration. WHY?!!! This is a SERVER not a laptop!

Once this was fixed I managed at least to get a static IP address set up, but it’s far more convoluted than with Solaris 10 or earlier.
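
For the record, this is roughly what it took, from memory; the interface name and addresses are just examples and the exact ipadm syntax varies a little between releases:

# Stop NWAM grabbing the interface and go back to manual configuration.
svcadm disable network/physical:nwam
svcadm enable network/physical:default

# Give the (example) e1000g0 interface a fixed address and a default route.
ipadm create-addr -T static -a local=192.168.1.10/24 e1000g0/v4
route -p add default 192.168.1.1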

Other strangeness in the design… All the X installation is there, except for the X server. Eh? What’s the point of that?

By default the GUI package manager isn’t installed. Once you do install it, however, it’s set up by default not to show any packages which aren’t already installed, which is confusing. If you know where to look you can change this, but it’s a stupid default.

Getting the NFS client stuff working was a challenge as well. When you manually run mount everything seems to work out of the box. NFS filesystems mount fine and everything looks dandy. So, you put some mounts into /etc/vfstab and ‘mount -a‘ works as expected. Reboot, however, and nothing happens! This is because most of the NFS client services are not turned on by default but magically work if you run mount. Turning on the nfs/client:default service doesn’t enable the other services it requires, however, and you don’t see this until a reboot. Stupid! It should work the same way at all times. If it works magically on the command line it should work at boot as well, and vice versa. Unpredictability on a server is a killer.
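
The fix, once you know where to look, is to ask SMF to bring up the dependencies as well. Something along these lines sorted it for me, though treat it as a sketch rather than gospel:

# Enable the NFS client plus the services it depends upon (-r is recursive).
svcadm enable -r nfs/client

# Check what that pulled in and that everything is now online.
svcs -d nfs/client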

On the bright side, at least the kernel is enterprise grade.

Weather based computing definitions.

Cloud Computing

A computing resource located “out there” somewhere, connected to the Internet and operated by a third party.

When the heat is on, just like real clouds, they can either evaporate or become a storm (see Monsoon Computing). In either case it’s not good news.

Fog Computing

Like Cloud Computing but down to earth. i.e. based in reality and generally under the organisation’s direct control. Often called a Corporate Cloud Computing resource.

This generally hangs around longer than is required but never lets the temperature get too high.

Mist Computing

You’re sure that you purchased the equipment for your corporate cloud computing resource, but you can’t see very much of it and it’s not a lot of use.

Very Light Drizzle Computing

You’re pretty sure that there must be a computing resource somewhere, you can feel it, but you can’t find it.

Drizzle Computing

You seem to have a large number of light-weight and low powered computing systems for your processing. However, all they seem to do is annoy you and never actually do anything useful.

Rain Computing

You have a large number of independent computers all working to solve your problem, or at least dissolve it.

Stair-Rods or Monsoon Computing

Somehow you seem to have huge numbers of high power processors on your hands, all working on your problem uncontrollably. Unfortunately, the upshot of this is that your problem isn’t solved, it’s washed away by the massive deluge of cost and possibly information overload.

So, do you have any more/better amusing definitions for weather analogous computing names? If so post them as comments below.

NotSoBASIC

As discussed in a previous posting, I’ve been musing over the development of a modernised version of the classic procedural BASIC language, especially with the Raspberry Pi in mind.

With this in mind I’ve been setting out some goals for a project and working a little on some of the syntactical details to bring structures, advanced for-loop constructs and other modern features to a BASIC language as backwardly compatible with the old Sinclair QL SuperBASIC as possible.

So, here are the goals:

  1. The language is aimed at both the 13 year old bedroom coder, getting him/her enthused about programming, and the basic needs of the general scientist. (Surprisingly, the needs of these two disparate groups are very similar.)
  2. It must be platform agnostic and portable. It must also have a non-restrictive, unencumbered licence (so not the GPL; probably Apache) so as to allow it to be implemented on all platforms, including Apple’s iOS.
  3. It must have at least two, probably three, levels of language, beginner, standard and advanced. The beginner would, like its predecessors in the 8bit era, be forced to use line numbers, for example.
  4. It must have fully integrated sound and screen control available simply, just as in the old 8bit micro days. This, with the proper manual, would allow a 13 year old to annoy the family within 15 minutes of starting to play.
  5. The graphical capability must include simple ways to generate publishable scientific graphical output, both to the screen and as Encapsulated PostScript, PDF and JPEG.
  6. The language must have modern compound variables, such as structures, possibly even pseudo-pointers so as to be able to store references to data or procedures and pass them around.
  7. The language should be as backwardly compatible with Sinclair QL SuperBASIC as possible. It’s a well tested language and it works.
  8. The language should be designed to be extendable but it is not envisaged that this would be in the first version.
  9. The language IS NOT designed to be a general purpose application development language, though later extensions may give this ability.
  10. The language will have proper scoping of variables with variables within procedures being local to the current call, unless otherwise specified. This allows for recursion.
  11. All devices and files are accessed via a URI in an open statement.
  12. Channels (file descriptors) must be a special variable type which can be stored in arrays and passed around.

As I said earlier, I’ve been thinking about how to do a great deal of this syntactically as well. This is where I’ve got so far:

[Edit: The latest version of the following information can be found on my website. The  information below was correct at 10am 23rd February 2012.]

Variables.

Variable names MUST start with an alphabetic character and can only contain alphabetic, numeric and underscore characters. A suffix can be appended so as to give the variable a specific type, e.g. string. Without a suffix character the variable defaults to a floating point value.

Suffixes are:

$ string
@ pointer

Compound variables.

Compound variables (structures) can be created using the “DEFine STRUCTure” command to create a template and then creating special variables with the “STRUCTure” command:

DEFine STRUCTure name
varnam
[…]
END STRUCTure

STRUCTure name varnam[,varnam]

An array of structures can also be created using the STRUCTure command, e.g.

STRUCTure name varnam(3)

The values can be accessed using a “dot” notation, e.g.

DEFine STRUCTure person
name$
age
DIMension vitals(3)
END STRUCTure

STRUCTure person myself, friends(3)

myself.name$ = “Stephen”
myself.age = 30
myself.vitals(1) = 36
myself.vitals(2) = 26
myself.vitals(3) = 36

friends(1).name$ = “Julie”
friends(1).age = 21
friends(1).vitals(1) = 36
friends(1).vitals(2) = 26
friends(1).vitals(3) = 36

As with standard arrays, arrays of structures can be multi-dimensional.

Structures can contain any number of standard variables, static arrays types and other structures. However, only structures defined BEFORE the one being defined can be used. Structure definitions are parsed before execution of the program begins. Structure variable creation takes place during execution.

Loops.

FOR/NEXT:

FOR assignment (TO expression [STEP expression] | UNTIL expression | WHILE expression) [NEXT assignment]
..
[CONTINUE]
..
NEXT [var]

The assignment flags the variable as the loop index variable. Loop index variables are normal variables.

The assignment and the evaluation of the assignment expression happen only once, when entering the loop. The test expressions get evaluated once every trip through the loop at the beginning. If the TO or UNTIL expressions evaluate to zero at the time of loop entry the commands within the loop do not get run.

The STEP operator can only be used if the loop index variable is either a floating point variable or an integer. The expression is evaluated to a floating point value and then added to the loop index variable. If the loop index variable is an integer then the value returned by the expression is stripped of its fractional part before being added to the variable.
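
As a quick sketch of how I see the simple TO/STEP forms reading (using SuperBASIC-style REMark comments):

REMark Print the first ten square numbers.
FOR i = 1 TO 10
  PRINT i, i * i
NEXT i

REMark A floating point loop index counting down in halves.
FOR x = 10 TO 1 STEP -0.5
  PRINT x
NEXT x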

WHILE/END WHILE:

WHILE expression [NEXT assignment]

[CONTINUE]

END WHILE

Equivalent to a FOR loop without an assignment using the WHILE variant e.g.

x = 10
WHILE x > 3 NEXT x += y / 3

END WHILE

is equivalent to

FOR x = 10 WHILE x > 3 NEXT x += y / 3

NEXT

DO/UNTIL:

DO

[CONTINUE]

UNTIL expression

The commands within the loop are run until the expression evaluates to a non-zero value.
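
A quick sketch of how I expect that to read, assuming the SuperBASIC RND(a TO b) function carries over:

REMark Keep rolling a die until a six comes up.
DO
  roll = RND(1 TO 6)
  PRINT roll
UNTIL roll = 6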

Functions and procedures.

A function is merely a special form of a procedure which MUST return a value. The suffix of a function’s name determines the type of the value returned, in the same way as with variable names.

DEFine PROCedure name[(parameter[,parameter[…]])]

[RETURN expression]
END PROCedure

DEFine FUNction name[(parameter[,parameter[…]])]

RETURN expression
END FUNction

Parameters are local names which refer to the passed values by reference. This means that any modification of the parameters within the procedure will change the value of any variables passed to it.

Variables created within the procedure will be local to the current incarnation, allowing recursion. Variables with global scope are available within procedures but will be superseded by any local variables with the same name.
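
Pulling those pieces together, here’s a sketch of a recursive function and a procedure which modifies its parameter via the pass-by-reference rule above (assuming SuperBASIC-style procedure calls without parentheses):

DEFine FUNction factorial(n)
  IF n <= 1 THEN RETURN 1
  RETURN n * factorial(n - 1)
END FUNction

DEFine PROCedure double_it(x)
  REMark x refers to the caller's variable, so this changes it in place.
  x = x * 2
END PROCedure

value = 5
double_it value
PRINT value, factorial(value)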

Joining the fast lane: Fibre to the Cabinet broadband Internet access is here.

Well, after quite a wait the Cowley BT telephone exchange has finally been enabled for Fibre to the Cabinet (FTTC) broadband. Even using BT’s own estimate, the exchange has been nearly two months late coming on-line.

So, what does having the new service involve?

Well, other than a hefty £80+VAT fee, it merely requires a BT Openreach engineer to visit your house and install a modem and an additional face-plate filter onto the house’s master “line” socket, and then go to the street cabinet containing your connection to rewire it. You will also need a firewall/router which can talk PPPoE; in other words, one which can use a network cable instead of a phone cable. These are the same as those used with Virgin Media cable-modems.

Although BT (via your ISP) will inform you that the process will take up to an hour, in fact it takes a lot less time than this. It’s about 5 minutes for the engineer to unpack the new modem and fit the faceplate, and then a further 10 minutes while he hunts for the correct street cabinet and re-wires your phone line. Assuming that you have your router fully set up beforehand, that’s it. He just does a few tests and leaves.

In my case, I had a Billion BiPAC 7800N router which can do both ADSL (phone line) and connect via a network cable so all I needed to do was change a setting and reboot it.

So, this, after some tidying, is my new communications system:

Now that everything’s wall mounted and I’ve put all the wires into a conduit it looks a whole lot neater than before. Also, it’s unlikely to be knocked or cables snagged.

At the top of the picture you can see the Billion router. It’s not much to look at but it is a superb router. I do like the way that it can be mounted vertically on the wall, thus taking less space laterally.

Below the router is the BT modem. Thankfully this is the mark 3 model so is less likely to die horribly.

Finally, connected directly into the wall power socket is the Devolo 200Mb/s power line networking module. This connects to a similar unit in the spare bedroom, where my server sits, and to a multi-port power-line network switch in the living room to which is connected the TV, PS3 and amplifier.

So, what does all this shiny new equipment give me over and above what I had before? Other than the 10 times download speed increase and the four times upload speed jump, it also means that the connection should be far more stable. I’m also only paying about £3 more for this service than I was for the ADSL MAX service I was previously on and I get an extra 30GB of download quota bundled in with it.

Basically, I’m happy with it and that’s all that matters.

[Edited to add historical broadband speed test data]