Warbook: An interesting ecological experiment on Facebook.

I’ve been playing Warbook, a third-party application on the Facebook site, and it has turned out to be a good parallel to forest ecology in some respects. Here’s why I think that:

In a mature forest the large trees take so many of the resources that at ground level hardly anything can grow. Seeds that germinate just can’t get enough energy to survive. It is only when a large tree falls that the opportunity exists for the seedlings to scramble for the new space and grow.

In Warbook the “energy” a player has is determined by their income, which is generated by their free land (which they have to buy), with some addition from mines (which take up free land). This land resource can be scavenged by other players, who can attack to take it. The player being attacked can defend using their army, which they also have to buy and upgrade. The problem for the “germinating” new players is this: before you can buy and keep enough land to generate enough wealth to flourish in the game, and build to the point where you can attack other players (there’s a minimum size of land holding you must have before you can attack), you have to be able to build up a large enough army, with a high enough defence strength, to hold onto that land. Of course, for those who got into the game early, the attackers were far fewer and far weaker than they are later in the game.

The point is that now it’s almost impossible for any new players to build up the resources needed. Larger players are “harvesting” the land resources of these new players with armies so powerful that there is no way the new players can defend against them, and both their land and their armies are being depleted at a rate far higher than they can afford to replenish them. Just like the large trees in a forest suppressing the undergrowth. The difference with Warbook, however, is that the “big trees” aren’t restricted in their area of influence and can (and do) sap the energy of any and all opposition. It doesn’t matter, therefore, if one “big tree” dies, as no space is made in the ecosystem to give an opportunity for the saplings to develop. In other words, the game rules are fully biased in favour of early adopters and new players are essentially excluded. There are no niches in the ecosystem for such players to inhabit.

For me, the game is both interesting (in an academic sense) and highly frustrating (‘cos I can’t get to the point where I can even play effectively). Oh well. 🙂

ZFS: How its design seems to be more trouble than it’s worth.

Now, let me say this first: ZFS seems like a wonderful thing. In fact, it is wonderful except for a couple of things, which make it totally undeployable for our new server. Actually, let’s put this another way. One thing makes it impossible, because the ZFS way of doing things is mutually exclusive with the way our system (and probably a huge number of other legacy systems) works.

The main bugbear is what the ZFS development team laughably call quotas. They aren’t quotas; they are merely filesystem size constraints. To get around this the developers use the “let them eat cake” mantra (“creating filesystems is easy”): create a new filesystem for each user, with a “quota” on it. This is the ZFS way.
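For concreteness, the “ZFS way” looks something like this (the pool name and user names here are hypothetical; the commands are standard ZFS administration, sketched from memory rather than taken from our actual setup):

```shell
# Hypothetical pool "tank": one filesystem per user, each capped with a
# "quota" -- which is really just a hard limit on that filesystem's size.
zfs create tank/users
zfs create -o quota=5G tank/users/alice
zfs create -o quota=5G tank/users/bob

# Note there is no separate soft quota: "quota" is a hard size cap on
# the filesystem, not a per-user quota in the traditional UFS sense.
```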

Unfortunately, this causes a number of problems (beyond the fact that there’s no soft quota). Firstly, instead of having only a few filesystems mounted you have “system mounts + number of users” mounted filesystems, which makes df a pain to use. Secondly, there’s no way of having a shared directory structure with individual users having separate file quotas within it. But finally, and this is the critical problem, each user’s home directory is now a separate NFS share.

At first look that final point doesn’t seem much of a worry until you consider its implications. To cope with a distributed system with a large number of users, the only manageable way of handling NFS mounts is via an automounter. The only alternative would be to have an fstab/vfstab file holding every filesystem any user might want. In the past this has been no problem at all: for all the user home directories on a server you could just export the parent directory holding them, put a line “users -rw,intr myserver:/disks/users” in the automount map, and it would work happily.
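In the old single-filesystem world, the indirect map entry quoted above was all that was needed (server and map names here are illustrative):

```
# /etc/auto_master (or auto.master): /home is handled by the auto_home map
/home   auto_home

# auto_home map: one entry covers every user's home directory, because
# they all live inside the single exported parent filesystem.
users   -rw,intr   myserver:/disks/users
```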

Now, with each user having a separate filesystem, this breaks. The automounter will mount the parent filesystem as before, but all you will see are the stub directories waiting for the ZFS daughter filesystems to be mounted onto them. There is no way of consolidating the ZFS filesystem tree into one NFS share, and no rule in the automount map files to do sub-directory mounting.
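With one ZFS filesystem per user, the map would instead need an entry per user, something like this (names illustrative):

```
# auto_home map: every user now needs their own entry, because each
# home directory is a separate filesystem and a separate NFS share.
alice   -rw,intr   myserver:/disks/users/alice
bob     -rw,intr   myserver:/disks/users/bob
# ...and so on for every single account, which does not scale.
```

(A wildcard entry such as “* myserver:/disks/users/&” can cover a flat one-directory-per-user layout, but it cannot reproduce the nested legacy paths described below.)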

Of course, the ZFS developers would argue that you should change the layout of your automounted filesystems to fit with the new scheme. This would mean that users’ home directories would appear directly below /home, say.

The problem here is one of legacy code, which you’ll find throughout the academic, and probably the commercial, world. Basically, there’s a lot of user-generated code which has hard-coded paths, so any new system has to replicate what has gone before. (The current system here has automount map entries which map new disks to the names of old disks on machines long gone, e.g. /home/eeyore_data/ )

The ZFS developers don’t seem to see real-world problems, or maybe they don’t WANT to see them as it would make their lives more complicated. It’s far easier to be arrogant and use the “let them eat cake” approach than to engineer a real solution to the problem, such as actually programming a true quota system.

As it is, it seems that for our new fileserver I’m going to have to back off from ZFS and use the old software device concatenation with UFS on top, which is a right pain and not very resilient.

Back from whence I came, Houghton Conquest 25 years on.

This morning my plan for the day consisted of lots of housework. However, having seen the weather I thought “Blow it! Who knows when there’s going to be another nice, warm day this year. I’m off for a drive.”

Having wondered where to go I had the idea of driving over to Ampthill Great Park and having my lunch sat looking over the Great Ouse valley towards Bedford and then having a wander around the village I grew up in. So off I toddled.

It’s amazing to me that I left there a quarter of a century ago. A time ten years longer than the period I actually lived there, but seemingly nowhere near as long. I still remember talking to Andrew Walpole on his drive in preparation for a bike ride as if it were yesterday. The brain’s a curious thing.

Anyway, back to today…

The weather did cloud over on the way, but I didn’t let that spoil things. It was still warm enough, though it did make the light levels a pain for photography. Anyway, as I said, I had a picnic in Ampthill Great Park (so called because it is a royal park, just as Windsor Great Park is, though there are no royal buildings left).

The park is the location of Ampthill Castle, one of the places Catherine of Aragon, King Henry VIII’s first wife, was placed after the divorce. In honour of this, a local member of the gentry had a memorial erected in the 18th century.

After lunch I drove over to Houghton Conquest and parked outside the Post Office which, to my surprise, was actually open on a Sunday. That’s very much a change from the “old days.” The main changes within the village over the time since I left have mostly been the addition of new housing estates. One largish one filled in the area between the High Street and the Bedford Road, which runs perpendicularly to it. The other two estates replace the two dairy farms, which used to be owned and run by the London Brick Company. The farm down Rectory Lane had been derelict for years and the last time I remember cattle being milked there was in the early 1970s.

I took about half an hour’s wander around the village before getting back to my car and driving home. A far more interesting day than doing housework! 🙂