My 30-year personal computing odyssey… So far.

The Journey Begins.

Sinclair ZX81

It was almost precisely 30 years ago today that my journey into the world of computing began. I remember the day my parents bought the Sinclair ZX81 which was to become my Christmas present. We’d gone to Bedford to buy it from W.H.Smiths, and it came in a brown cardboard box with nothing printed on the outside. We’d then all got into the car and, whilst we drove up past St. Neots towards some shop on the Cambridge road, I was able to open the box and start to read the manual. (We didn’t find the shop in the end and I can’t remember what it was supposed to be selling. Instead we turned back at a small roundabout and drove home.)

At the time I thought of computers as literally magical things. I’d seen them on “Tomorrow’s World”, where a year or so earlier they had been extolling the new technology which now cost less than a thousand pounds (showing the TI-99/4). Other than this I’d only seen computers on “Horizon” or in science fiction, but here, now, was one sitting in a small box on the back seat of my parents’ car beside me. I also marvelled at the ZX81 manual with its painting of a science fiction inspired landscape. (Why are computer manuals so much more boring these days?)

As for programming, at this time I’d only overheard conversations from my classmates at my new school, who had had some lessons in the science block. They talked of mystical incantations and something to do with “print, print, print.”

Of course, this being a Christmas present, once we got home it was put away in a top cupboard, out of sight. But still, that was the beginning of the journey.

One Small Computer For A Man…

And so, Christmas Day came and I was at last able to get my hands on the ZX81. It was set up on a chrome steel and glass coffee table and connected to our old “Elizabethan” 12″ black and white portable TV, which we’d used in the caravan on holiday. I already had a “Binatone” cassette recorder, which I remember getting for my birthday in ’77, but at this point it couldn’t be used as I had no tapes with software on them. So the Christmas of 1981 was spent sitting cross-legged, tapping at the flat plastic membrane keyboard and typing in the examples from the manual.

It wasn’t long, however, before I hit the limit of the 1K memory, so my progress stalled for a while. It wasn’t until my birthday in February that I managed to get the 16K RAM Pack. Wow! How could anyone fill a whole 16K?! Well, I certainly couldn’t.

Anyway, at this point I think I should start compressing the time scales, otherwise this post will become a book. Suffice it to say that the ZX81 was my mainstay computer for a further 15 months and it taught me the basics. It also taught me patience, after I spent one and a half days typing in hex code out of “Your Computer” magazine only for a thunderstorm to wipe out my work. A further two days of typing later and a rudimentary “Space Invaders” game was ready to play, which worked for about a minute until it crashed due to a typo somewhere in the pages of code.

The Steady March of Progress.

In the May of 1983 I finally persuaded my Dad to help me buy a replacement computer, a ZX Spectrum 16K. This cost a huge amount, £125, and at the time £125 WAS a lot of money, at least for my family. Of course, the timing was awful, as only a couple of weeks later Sinclair dropped the price of the Spectrum so that £125 would get you the 48K model. Later in the year I sold the ZX81 to one of my Dad’s workmates so I could buy a Fox Electronics 48K upgrade, as many of the games I wanted to play by now required the larger memory. (Can you remember when games were all £4.99? Wasn’t it a scandal when they suddenly jumped to about £6 a pop?! :-)) I later bought the ZX81 back from the person I’d sold it to, at a profit for him, and it’s now in my loft.

The Speccy was the machine upon which I did most of my first real-world work. This was helped by the addition of the Interface 1 and ZX Microdrives in the summer of ’84, along with my first printer, a Brother HR-5 thermal ribbon printer which could output at an amazing 30 characters per second. This combination took me right through to halfway through my degree, and on it I wrote most of my essays using the “Tasword 2” word processor.

During this period I made my first computer purchase mistake. During the latter months of 1984 I had been reading “Your Computer” magazine and getting more and more enthused about the Memotech MTX series machines. They were sleek (for the time) and they even professed to have a ZX Spectrum emulator in the works. Best of all, they had a built-in debugger/assembler/disassembler on board, just like the “professional” RML 380Z I’d seen and used at school. How could it be bad?

So, after saving up my student grant (yes, they were magical things too) by basically not having a social life in the first term at Uni. (this wasn’t a conscious decision), I spent £199 on an MTX500. This was a very bad move. The machine itself was OK, but being basically an MSX machine without the compatibility, and with software expensive and hard to come by, it was a bit of a lemon. The Spectrum still got more use.

And On, Into The Future.

Sinclair QL

In the January of 1986 I managed to convince my Mum that I needed something more capable on which to do my University work, and so along came the Sinclair QL.

This was a major leap forward. Not only did it come with a full office suite of programs, including a word processor, spreadsheet and database application, but it also had a procedural BASIC programming language and pre-emptive multitasking. In other words: welcome to the modern world.

Suffice it to say that this machine was invaluable for my University work, not only as a word processor, upon which I wrote my degree mapping project report (I won’t go into the story of the power cut in the halls as I was writing the conclusions), but also as a machine for writing programs to do some of the project work, such as normative mineral analysis and plotting up data for the remote sensing coursework.

It was also the machine which really got me into low-level programming and assembler. QDOS is/was a beautiful and simple operating system to code assembler on, and Motorola M68000 assembler is really quite high level; the combination made it simple to write programs. The high-water mark, for me, was a full emulation of the University College London BBC Micro terminal emulator, engineered from their documentation. It was a combination of a DEC VT52 emulator and a Tektronix 4010 graphics terminal emulator, with access to the BBC’s *FX commands.

The QL also acted as my development machine for many projects during my MSc in Computer Science, especially those involving assembly coding. In a way, this is THE machine I learnt the most from.

Onwards and upwards.

I’m now going to move up a gear and skim past my first floppy disk drive in ’87, the second-hand BBC Micro bought to play Elite on in the December of the same year, and even the Atari 520STM in the summer of ’88. No, the next “big thing” was the first hard disk drive, in 1989.

It was a revolution! You could store huge amounts! It was fast! It was expensive! Wow!

Actually, other than the first and last statements, these would seem laughable today. The device was a 28MB drive for the Atari ST and cost a whopping £400. In today’s money you should probably at least double that figure. Today 28MB would seem like a pitifully tiny amount of storage, enough to hold a couple of images taken with a digital camera, but back then it seemed cavernous. This was helped by the fact that the ST could only use a modified version of the FAT12 file system and the hard disk drivers could only use disk partitions up to 4MB in size!

Oh, and as for the statement “it was fast”, well, all things are relative. There was a disk speed testing program which came with the disk utilities which could measure the speed of your drive. Bear in mind that this drive was a Seagate SCSI device… the maximum read speed was about 600K/s and writes maxed out at about 400K/s! Today I am getting similar speeds from my ADSL connection, and I’m not that close to the exchange.

The Technological Slow Down.

Up until now it seemed that every year brought a new wonder. Indeed, with the arrival of first Minix and then MiNT on the Atari ST and TT030, I was getting closer and closer to having a UNIX box in the house. 

My home computing before the PC era.

Before the attack of the IBM PC clones

Actually, in 1993 I picked up a Sun 3/80 via Alec Muffett, then purchased a Seagate 425MB hard disk for about £500 to get it running, and then I DID have a UNIX machine at home. Things were looking up!

After the PC revolution

My home computing set-up in 1995, after the arrival of my second PC, a 486DX2-66.

It wasn’t until 1994 that I took my first steps in the “PC” world, picking up the bare bones of a 386SX machine and then sourcing the components to make a working system so that I could try out this new Linux thing and play with Microsoft Windows. Overall I think it cost me another £500 or so to get it running.

Still, it was essentially the end of the “boost phase” of home computing as far as I was concerned. At this point I had effectively, if only in primitive form, everything I have here today. I had a network (10Base2), UNIX and Linux machines, a Windows box and Internet connectivity (albeit via dial-up modem). From then on it was merely a case of a gradual improvement in speed and usability.

Until….

Enter the Age of the iDevice.

iPhone

Yes, I can say that we have now entered a new phase of the computing story. It’s both a very good and a very bad thing.

Effectively, for me this was prefigured many years before I got my first Apple device, when I got my Palm Pilot Pro and a mobile phone (a Motorola MR30 brick) in ’97. But it wasn’t really a revolution until I got my first smartphone in 2003, a Handspring (later Palm) Treo 600. It only had GPRS connectivity, but it was e-mail on the go! It had limited web browsing. It was amazing at the time. (It also had amazing battery life, but that’s another story.)

But it wasn’t until I got the iPhone 3G that I really found out how mobile connectivity should be: simple, sleek, quick, and it “just worked”. The iPhone 4 was just as good.

However, the bad thing about all these devices is the way that the iDevice style of simplification is starting to intrude onto desktop (and laptop) machines, locking users out of being able to access and program them. It’s almost as if you’re only buying the privilege to hold and use the devices rather than owning them. This is a potentially slippery slope.

Anyway, I’ve been rambling on for far too long now, so I’ll conclude this piece and look forward to, hopefully, another 30 years of the odyssey to come. I think it’s going to be more evolutionary than revolutionary.

[Edit: 7:50pm 12th November, 2011. : Replaced stock image of Sun 3/80 with image of my computer set-up in 1994 and 1995.]

On the fly VMs: Viable security model for downloaded apps?

I’ve been thinking… always quite dangerous I know…

I woke up early this morning and couldn’t get back to sleep, and for some unknown reason I started thinking about downloaded applications and how to prevent trojans getting a hold. Then it came to me: why let the application have real access to the system, especially the filesystem, at all?

I started wondering how feasible it would be to modify the operating system so that it creates, on the fly, a virtual machine which is a clone of itself, within which an untrusted application is run. This VM would not have any real write access to the filesystem but would instead have a copy-on-write shadow copy of the real one. For performance reasons it would have to have pretty transparent access to the graphics sub-system, but this shouldn’t be too high a security risk. Once the application had terminated, the filesystem write operations could be vetted and a risk assessment and “reputation” for the application determined before actually making the changes to the real data on the disk.

Later on the application could either be manually unrestricted or, if its “reputation” rose above a certain threshold, unrestricted automatically.
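Just to make the copy-on-write part of the idea concrete, here is a very rough sketch of how you might approximate it today on Linux using mount namespaces and overlayfs, rather than the full OS-level virtual machine described above. The paths, the missing error handling and the trivial “vetting” step are all placeholders, so treat it as an illustration only:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run an untrusted program over a copy-on-write shadow of the real
   filesystem, then inspect what it tried to write.  Needs root (or a
   user namespace) and a kernel with overlayfs support. */
int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s untrusted-program [args...]\n", argv[0]);
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: private mount namespace so the overlay is invisible
           to the rest of the system. */
        unshare(CLONE_NEWNS);
        mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL);

        /* /tmp/shadow/{upper,work,merged} are assumed to already exist.
           Writes land in "upper"; the real root is never modified. */
        mount("overlay", "/tmp/shadow/merged", "overlay", 0,
              "lowerdir=/,upperdir=/tmp/shadow/upper,workdir=/tmp/shadow/work");

        chroot("/tmp/shadow/merged");
        chdir("/");
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    waitpid(pid, NULL, 0);

    /* The "vetting" step: here we merely list what the program wrote;
       a real system would assess it and selectively merge it back. */
    return system("find /tmp/shadow/upper");
}

It’s nowhere near the full clone-of-the-OS idea, of course, but it does show how cheap the shadow-filesystem part has become.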

Anyway, it was just a thought.

[Edit] More thoughts added as a comment.

Google+: Cooking with the curate’s egg?

About a week ago I managed to get hold of an invitation to Google+, the new, not quite publicly available, still in development, nascent social site Google are toying with. It’s got quite a “buzz” campaign running about it at the moment and all the technorati are flocking to use it. But is it any good? Or, more importantly, could it become good enough to win mainstream users from Facebook?

Well, it does have a lot going for it. For a start the interface is clean, and the management of the social groups is light years ahead of Facebook’s. There are issues with some of the privacy decisions made in the design, such as limited-circulation posts becoming visible to those outside the initial distribution if one of the people within the circle posts a comment with public distribution. However, these are teething problems and the site is still very much under development.

There is currently no API for building external applications, such as games. For some people this is a major problem, for others it’s a blessing. It has been stated that a development system is in the works, so I don’t see this as a roadblock in future.

Currently, the feel of the site has one major downside for a social site: the whole experience seems quite solitary. This isn’t because of a lack of people to be “friends” with, but more that you have no idea whether any of your friends are currently on-line. You may not want to interact with them there and then, but it’s nice to know that they’re about.

The other problem I see currently is that Google+ seems to be mostly gluing other Google services together. The image uploading and sharing is done using Picasa, which isn’t ideal for posting quick images on the go from a smartphone. The messaging service is a poorly integrated link to Google Chat.

One of the most interesting new facilities, which could actually make people prefer Google+ over other systems, is the “Hangout” audio/video conferencing and chat sub-system. However, this is currently crippled by two problems. The first is, again, that you don’t know who’s on-line at the moment, i.e. you can’t just invite those you know are around for a chat; you have to invite blindly. The second is that you have to download and install a browser plug-in for it to work.

So, do I think that it could rival Facebook in the end? Hmm… at the moment I’m not sure. There are currently too many things which make interacting with your friends less immediate. Also, the reliance on glued-on functionality from other Google services which don’t quite match a social sharing system could well be a long-term problem.

So there you have it: at the moment it’s a curate’s egg, good in parts. I don’t want to damn it so early in its development, but I am a little worried that the early reputation may stick. Let’s hope it does come to rival Facebook, as Facebook needs competition, especially as its developers seem to be getting into the Firefox and Gnome developers’ mindset of changing things for change’s sake and seeing themselves as the only arbiters of good design.

Enthusing teen minds: Why today’s computers won’t create tomorrow’s programmers.

The recent 30th anniversary of the launch of the Sinclair ZX81, and the subsequent post on his blog by Jim Finnis, brought back to me a recurring thought: that today’s computer technology is the antithesis of what is required to enthuse a teenager into wanting to discover and play.

The computers of the early 80s were a blank canvas. You plugged them in, switched them on and (hopefully) the input cursor blinked at you. There was no decoration, no clutter and it was something waiting for YOU to do something to it.

Not only this, but with the manual which came with it a 13-year-old could, within 5 minutes, print their name on the screen. Within 10 minutes, at least with the second generation of machines, they could make a funny noise. And within half an hour they could have their name scrolling up the screen in different colours whilst making unmusical noises and annoying their parents… they were hooked!

Now, let’s look at today’s technology…

The desktop or laptop computer takes an age to start up (i.e. more than 5 seconds) and totally insulates the user from what it is.

Smartphones are usually on all the time so don’t have this problem, and the same goes for tablets.

They’re immediately brimming with functionality, all vying for your attention, but they’re also incredibly locked down. You can do absolutely anything… ANYTHING, as long as it’s what the visionary who steered the programming teams thinks you should want to do. Woe betide you if you want to do anything different. It’ll either ignore you or give you an unhelpful suggestion in a dialog box. You can be creative, but only in the ways you’re told you can be.

So, what about the art of programming?

Well, on tablets and smartphones forget any native fun. Apparently this is too subversive. On the desktop it’s only slightly better (and I’m not singling out any desktop OS here). What are your options?

Well, on MacOS and Linux you can open a shell window, where all sorts of interpreters and compilers are available, along with all sorts of graphics libraries to use with them. You would think that this would be the ideal playing ground. Sorry to burst that bubble: it’s a great playing ground if you’re already a programming expert. It’s like taking a 5-year-old into an engineering workshop, sitting him down and then complaining when he doesn’t build a car, because he had all the tools available to do it and hence it must be his fault.

No, these environments are hopeless for teaching and enthusing. The energy barrier is so large that it’s too daunting to even try. Also, how many lines of code would it take, in one of these modern development environments, to do the equivalent of the following?

10 FOR x=1 TO 100
20 FOR y=0 TO 7
30 INK y : PAPER 7 - y
40 BEEP 1,y
50 PRINT "Noisey coloured text"
60 NEXT y
70 NEXT x

I bet you’ll find that it’s quite a large number of lines of code, using all sorts of weird and wonderful libraries, possibly some non-standard ones to do the sound, and a whole lot of code to manage the framework, create a window with the correct attributes, define the font etc. Hopeless!
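As a rough illustration (and only a sketch, using the ncurses library on a Unix-like system rather than any particular vendor’s framework), something approximating those seven lines of BASIC might look like this in C, and even then the terminal bell gives you no control over pitch or duration:

#include <curses.h>
#include <unistd.h>

int main(void)
{
    initscr();                       /* set up the screen */
    start_color();
    scrollok(stdscr, TRUE);          /* let the text scroll */
    for (int c = 1; c <= 7; c++)
        init_pair(c, c, 7 - c);      /* roughly INK c : PAPER 7-c */

    for (int x = 1; x <= 100; x++) {
        for (int y = 0; y <= 7; y++) {
            if (y > 0) attron(COLOR_PAIR(y));
            printw("Noisey coloured text\n");
            beep();                  /* BEEP 1,y: no pitch, no duration */
            refresh();
            sleep(1);
            if (y > 0) attroff(COLOR_PAIR(y));
        }
    }

    endwin();
    return 0;
}

And that’s the friendly, terminal-based option, built with something like cc noisy.c -lcurses; a windowed, font-rendering version would be far longer still.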

Oh, and when it comes to drawing lines and circles etc. Oh dear.

Of course, a great many people think that a computer with similar functionality to the old BBC Micro or ZX Spectrum could never compete in the mind of a teenager when they have all that touch-screen goodness and Angry Birds to play with. I beg to differ. That this is profoundly not the case was most delightfully illustrated in the second episode of the BBC’s “Electric Dreams” series (unfortunately not available to watch on-line), where the family was given a BBC Micro to play with. The teenage son brought his best friend home from school to play with it and they thought it was awesome. They liked that it was a blank sheet which they could make do what they wanted, rather than being told what they should want to do by the device. And, of course, what they wanted it to do was make silly noises and write their names on the screen in different colours. It sparked enthusiasm!

So, what can be done?

First of all we need to ignore the idealists who think everyone should start their programming life learning something worthy and object orientated. Once the kids are hooked they can learn that later. Besides, that’s not how people’s minds work: you don’t see object orientated recipe books for a reason. Also, however annoying to the seasoned programmer, line numbers help a beginner understand the sequential way that programs work. In other words, the early 80s micro BASICs got it mostly right. BASIC does stand for “Beginner’s All-purpose Symbolic Instruction Code”, after all.

Firstly, any system which is going to enthuse HAS to have, as its core functionality, the “5, 10, 30 minute” teen-grabbing fun element outlined near the beginning of this post. Without it the whole thing’s lost. Any system would also have to allow growth. Just as BBC BASIC allowed the nascent programmer to grow into using procedures, so should any new project, and possibly more, such as variable typing, scoping etc. Line numbers could be made optional in an advanced mode.

Secondly, the freedom of the code itself is far less important than the freedom to discover, so any project should not use a viral license such as the GNU General Public License (GPL) but instead use something such as the BSD license.

Thirdly, and helped by the above, the core should be written in a platform-neutral way, with the platform-specific interface on top. In this case, probably the best foundation would be the GNU compilers, specifically their implementation of Objective-C, with the Qt libraries to interface with most operating systems (except, notably, Apple systems, especially the iPhone/iPod/iPad).

The biggest fly in the ointment with this whole pipe dream is that I just don’t have the skills to develop such a system. (Another would be getting people such as Apple to allow the system to be made available via their App Store type portals.)

So, anyone interested in starting a project? 😉

The horror! Scientific code and how not to read your arguments…

Over the years I have seen many, many examples of poor programming practice, usually kludges and quick fixes, but today I saw the most horrible code for reading in command-line arguments in a C program ever. I just had to share the horror…

   if ( (argc-1) < 5 ) {
	.
	.
	.
	[ Usage error response code removed]
	.
	.
	.
   }

   /* read in command-line arguments */
           
   numFiles = (argc-1) - 6;
   sscanf( argv[ numFiles+1 ], "%s", insFileName );
   sscanf( argv[ numFiles+2 ], "%s", outFileName );
   sscanf( argv[ numFiles+3 ], "%d", &outType );
   sscanf( argv[ numFiles+4 ], "%hd", &windowStartTimeCodeword0 );
   sscanf( argv[ numFiles+5 ], "%d", &newStartLine );
   sscanf( argv[ numFiles+6 ], "%d", &newEndLine);

Now, where can I start with this? Erm, I’m a bit dumbfounded actually.

Not only does the test for the incorrect number of arguments check for the wrong number, but the code then uses an index derived from the last argument to reference all the other values! Of course, this means that if the wrong number of arguments is given then the values are put into the wrong variables. Worse, they could be read from memory the process doesn’t own.

And there’s more… it blindly sscanf()s them into the variables, with no bounds checking on the strings.

Now, you may have spotted that if one argument is left off the command line, the “input file” becomes the executable itself and the “output file” is actually the input data file. This is how it came to my attention: while trying to debug the program for a student we found that it wasn’t reading the data correctly… and the data file was mysteriously emptied of its hundreds of megabytes of data each time the program was run. Oops!
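For contrast, a minimal defensive version (just a sketch, assuming the six values really are the only arguments and using made-up buffer sizes) might look something like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char insFileName[256], outFileName[256];
    int outType, newStartLine, newEndLine;
    short windowStartTimeCodeword0;
    char *end;

    if (argc != 7) {
        fprintf(stderr, "Usage: %s insFile outFile outType windowStart startLine endLine\n", argv[0]);
        return EXIT_FAILURE;
    }

    /* Copy the filenames with an explicit bound, never an unbounded %s scan. */
    strncpy(insFileName, argv[1], sizeof insFileName - 1);
    insFileName[sizeof insFileName - 1] = '\0';
    strncpy(outFileName, argv[2], sizeof outFileName - 1);
    outFileName[sizeof outFileName - 1] = '\0';

    /* Convert the numeric arguments with strtol and reject trailing junk. */
    outType = (int)strtol(argv[3], &end, 10);
    if (*end) { fprintf(stderr, "Bad output type: %s\n", argv[3]); return EXIT_FAILURE; }
    windowStartTimeCodeword0 = (short)strtol(argv[4], &end, 10);
    if (*end) { fprintf(stderr, "Bad window start: %s\n", argv[4]); return EXIT_FAILURE; }
    newStartLine = (int)strtol(argv[5], &end, 10);
    if (*end) { fprintf(stderr, "Bad start line: %s\n", argv[5]); return EXIT_FAILURE; }
    newEndLine = (int)strtol(argv[6], &end, 10);
    if (*end) { fprintf(stderr, "Bad end line: %s\n", argv[6]); return EXIT_FAILURE; }

    /* ... rest of the program ... */
    return EXIT_SUCCESS;
}

Checking argc exactly, bounding the string copies and validating the numbers would have saved that student’s data file.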

So, dear readers, have any of you ever seen a worse command line parsing code segment?

IPv4 addresses almost gone, IPv6 not finished yet. Oops!

As has been very widely noted, the last couple of large blocks of Internet Protocol version 4 addresses have been assigned to the regional registries, and rightly there have been a large number of people stating that we need to get ready for the transition to IP version 6.

However, there are a couple of niggly little problems, due partly to IPv6’s design and partly to tardy implementation, neither of which impacts upon the general public and their edge networks but both of which will impact upon the security and management of corporate networks.

So, what are these two problems? Well, they’re both to do with network address assignment; the first is a foolish design decision in the protocol itself which has a whole host of unintended consequences attached to it.

The feature I’m talking about here is stateless address assignment, where a client machine will self-assign its address and self-discover the route out to the wider Internet. On the face of it this seems like a brilliant idea which will liberate the normal user from worrying about setting up IP addresses and all that tedious and confusing networking stuff: it all “just works”. Brilliant! And in a perfect world, where everyone is smiley, helpful and trustworthy, it would be. It’s a pity that the real world isn’t like that. Having said that, this doesn’t really affect personal networking within people’s homes, but it does greatly affect the security and policing of corporate networks.

At this point it’s probably best to describe how security and policy are implemented with regard to network addresses and packet routing in IPv4 networks, so as to let you contrast the differences and the problems inherent in the self-assigned address world of IPv6. Currently a computer can either be manually assigned an address and network route, which then has to be configured directly on the computer in question, or it can be assigned them automatically from a centrally managed Dynamic Host Configuration Protocol (DHCP) server. In the latter case it’s not only the network address and route information which can be given to the computer, but other information such as its host name and various other items which it can use to interact correctly with the rest of the network. The centrally managed DHCP server can also tell any computer it doesn’t know (or that the administrators don’t want to have access) to bog off and hence not get onto the network. Using this very useful system, administrators can assign different outgoing network routes to different sets of client machines, which can help with load balancing and various other advantageous policies that only humans with an overview of the whole network can see.
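To make that concrete, here is the sort of thing an administrator might put in an ISC dhcpd configuration today (the addresses and MAC address are, of course, made up for illustration):

# Only known machines get an address; everyone else is refused.
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.100 192.168.10.200;
    option routers 192.168.10.1;            # default outgoing route
    option domain-name-servers 192.168.10.5;
    deny unknown-clients;
}

# A known machine, pinned to a fixed address and a different gateway.
host lab-pc-01 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.10.20;
    option routers 192.168.10.2;
}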

As you can see, IPv6’s self-assignment of addresses and self-discovery of network routes by-passes all this control. If you add to this certain client operating systems being “helpful”, offering network tunnels for IPv6 clients out of the current network to the outside world and advertising themselves as routers, it becomes a security nightmare, as local outgoing firewall policies and protections are subverted.

Now, this problem has been foreseen, if belatedly, by a group who have, against the uproar of the IPv6 purists, defined an IPv6 version of DHCP. (Note: the purists hate it because it breaks their ideological tenet that all network peers should be equal and free to do as they wish.)

So, surely this means that IPv6 is ready? Erm, no. You see, DHCPv6 is currently only a paper exercise. The technical details have been hammered out and the specification documents (RFCs) have been published, but there are no implementations out there. Oops!

So, what does this mean for the whole IPv4 to IPv6 transition? Well, it means that internal corporate networks will not be able to change to the new protocol and will be forced to live behind an IPv4 to IPv6 network address translation (NAT) gateway. (Note 2: IPv6 purists cringe even more about this technology; they see NAT as the spawn of the devil, as it stops all peers being equal and able to talk directly with each other.)

I can foresee the transition from IPv4 to IPv6 being a long one, with, to start with, only the core Internet and those machines which live in the no-man’s-land where external services sit changing over to IPv6, and everything else staying behind huge NAT gateways. Internet Service Providers (ISPs), whose customers don’t generally have fixed network addresses anyway, will sit all their customers in IPv4 bubbles and this state of affairs will ossify. All web sites will be forced to use IPv4-compatible addresses.

Eventually, after many years, all the tools and security issues with IPv6 will be sorted out and slowly, very slowly, the corporate world will change its networks one by one, but there will always be “legacy” IPv4 networks in there, at least for 20 years or so. For ISPs the transition will be quicker. They’ll probably begin with a separate product for IPv6 users, or merely provide IPv6 gateway routers to new customers (quite probably using an IPv4 NAT bubble for the home network to start with, as quite a bit of embedded A/V equipment will not be IPv6 capable). I can foresee that even this transition will take a good decade. During this time all web servers will have to be on IPv4-mappable addresses.

It’s going to be a very long haul, so expect things to break horribly along the way.