Monday, 30 December 2013

Slowtake QuickTake!

A Digital Preservation Story.

About 7 years ago a Manchester friend, Sam Rees, gave me an Apple QuickTake 150: one of the earliest colour digital cameras, from around 1995. But he didn't have the right drivers, so I've never known whether it works or is just junk. A few months ago I tracked down the drivers on the Macintosh Garden website, so yay, in theory I could test it!

But obtaining the drivers is only a small part of the problem. The QuickTake only works with Macs from before 1998, and even if you have one, you have to find compatible media to transfer the downloaded drivers in the right data format. All this is challenging. The download itself comes as a .sit (StuffIt) file, which modern Macs don't support as standard. When you decompress it you find that, inside, the actual software and drivers are disk image files, but not in a disk image format understood by the older Mac I have (a Mac running Mac OS 9 could work, but my LCII only runs up to Mac OS 7.5.3).

In the end I used a 2002 iMac to decompress the .sit, because at least that was possible. The plan was to connect a USB Zip 250 drive to the iMac, copy the images to a Zip 100 disk, then use a SCSI Zip 100 drive on the LCII to load in the drivers.

However, I couldn't convert the floppy disk images to Disk Copy 4.2 format for my LCII, so I took a chance that simply saving the files in each floppy disk image as a set of folders might work.

Even getting an old, circa 1993 Macintosh to work is a challenge. I'm fortunate in that I have an old, but still working, SCSI hard disk. But I still needed a special Mac-to-VGA monitor adapter to see its output (which I connected to a small LCD TV), and still had to spend some time hunting down the right kind of SCSI cable (D-type to D-type rather than D-type to Centronics) to hook up the Zip 100 drive.

After all this, and the 30 minutes it took to install all the QuickTake software (yes, just putting all the files in folders worked!), I was finally able to test it (no manuals, I had to guess) and with a bit of fiddling was able to load wonderful fixed-focus VGA images from the camera in mere seconds (each image approx 60Kb). Opening and decompressing them took about 90 seconds each on my slow LCII though!

Here's a picture of my family and our cats taken with the QuickTake 150 on 28 December 2013. I used the 10s timer mode to take the photo, with the camera balanced on a book on an armchair - so apologies for the framing :-)

As you can see, the clarity of the image is actually pretty good. The native image required roughly 64Kb which, given an original 24-bit VGA image, means the QuickTake camera must have compressed images by about 14x.
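The 14x figure is easy to sanity-check (assuming a 640x480 image at 24 bits per pixel, which is my assumption rather than a figure from Apple's documentation):

```python
# Rough compression-ratio check for the QuickTake 150's standard mode.
# Assumes 640x480 at 24 bits (3 bytes) per pixel - an assumption, not a
# figure taken from Apple's documentation.
width, height, bytes_per_pixel = 640, 480, 3
raw_bytes = width * height * bytes_per_pixel   # 921,600 bytes uncompressed
stored_bytes = 64 * 1024                       # ~64Kb per image on the camera
ratio = raw_bytes / stored_bytes
print(f"raw: {raw_bytes} bytes, stored: {stored_bytes} bytes, "
      f"ratio: {ratio:.1f}x")                  # about 14x
```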

When viewed on the LCII, the images appeared rather speckled due to the PhotoFlash software implementing a fairly crude dithering algorithm (simulated here using GIMP).

Thus ends a 7-year quest to test an Apple QuickTake 150 digital camera - thanks Sam!

Tuesday, 10 December 2013

Z80 Dhrystones

In the early 80s, my schoolmate David Allery's dad's workplace had a pdp-11/34, a minicomputer designed in the 1970s. All the reports at the time implied that a pdp-11 anything had absolutely awesome performance compared with the humble 8-bit computers of our day.

Yet decades later, when you look at the actual performance of a pdp-11/34, it seems pretty unremarkable on paper. You can download the pdp-11 handbook from 1979, which covers it.

First, a brief introduction to the computer processor, the CPU, which executes the commands that make up programs. I'll assume you understand something of early 80s BASIC. A CPU executes code by reading in a series of numbers from memory, each of which it looks up and translates into switching operations that perform relatively simple instructions. These instructions are at the level of regn=PEEK(addr); POKE(addr,regn); GOTO/GOSUB addr; RETURN; regn = regn+/-/*/divide/and/or/xor/shift regm; compare regn,regm/number. And not much else.
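To make that concrete, here's a toy sketch of that fetch-decode-execute loop. The opcode numbering and instruction set are invented purely for illustration; a real CPU does the same thing in hardware, one numeric opcode at a time:

```python
# A toy 'CPU' loop illustrating the PEEK/POKE-level instructions described
# above. The opcodes here are made up for illustration and don't
# correspond to any real processor.
memory = [0] * 256
regs = [0, 0]          # two registers, r0 and r1

# program: r0 = PEEK(100); r0 = r0 + r1; POKE(101, r0); halt
LOAD, ADD, STORE, HALT = 1, 2, 3, 0
program = [(LOAD, 0, 100), (ADD, 0, 1), (STORE, 0, 101), (HALT, 0, 0)]

memory[100] = 40
regs[1] = 2
pc = 0                 # program counter: which instruction we're on
while True:
    op, n, arg = program[pc]
    pc += 1
    if op == LOAD:     # regn = PEEK(addr)
        regs[n] = memory[arg]
    elif op == ADD:    # regn = regn + regm
        regs[n] += regs[arg]
    elif op == STORE:  # POKE(addr, regn)
        memory[arg] = regs[n]
    else:              # halt
        break
print(memory[101])     # prints 42
```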

The pdp-11 was a family of uniform 16-bit computers with 8x16-bit registers, 16-bit instructions and a 16-bit (64Kb) address space (though the 11/34 had built-in bank switching to extend it to 18 bits). The "/number" refers to the particular model in the family.

On the pdp-11/34, an add rn,rm took 2µs; add rn,literalNumber took 3.33µs and an add rn,PEEK(addr) took 5.8µs. Branches took 2.2µs and Subroutines+Return took 3.3µs+3.3µs.
That's not too different from the Z80 in a ZX Spectrum, which can perform a (16-bit) add in 3µs; load literal then add in 6µs; load address then add in 7.7µs; branch in 3.4µs; and subroutine/return in 4.3µs+2.8µs.
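Those Spectrum figures fall straight out of the standard documented Z80 T-state counts at the Spectrum's 3.5MHz clock. A back-of-envelope check (ignoring the Spectrum's memory contention, which slows real code a little further):

```python
# Convert Z80 T-state counts to microseconds at the ZX Spectrum's 3.5MHz.
# The T-state counts are the standard documented Z80 figures.
CLOCK_MHZ = 3.5

def us(tstates):
    """Time in microseconds for a given number of T-states."""
    return tstates / CLOCK_MHZ

print(f"ADD HL,DE (11 T): {us(11):.1f}us")               # ~3.1us
print(f"LD DE,nn + ADD HL,DE (10+11 T): {us(21):.1f}us") # 6.0us
print(f"LD HL,(nn) + ADD HL,DE (16+11 T): {us(27):.1f}us") # ~7.7us
print(f"JR e, taken (12 T): {us(12):.1f}us")             # ~3.4us
```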

So, let's check this.

A 'classic' benchmarking test is the Dhrystone test, a simple synthetic benchmark written in 'C'. A VAX 11/780 was defined as having 1 dhrystone MIP, and other computers are rated relative to that.

If you do a search, you'll find the pdp-11/34 managed 0.25 dhrystone MIPs. To compare with a ZX Spectrum I used a modern Z80 'C' compiler, SDCC; compiled a modern version of Dhrystone (with the function declarations modified a little so it compiles as an ANSI 'C' program, but otherwise unchanged); and then ran it on a Z80 emulator. Once it compiled, I was able to ascertain that it could run 1000 dhrystones in 13 959 168 TStates.

The result was that if the emulator was running at 3.5MHz, it would execute 0.142 dhrystone MIPs, or about 57% of the speed of a pdp-11/34. Of course, perhaps a more modern pdp-11 compiler would generate a better result for the pdp-11, but at least these results largely correlate with my sense that the /34 isn't that much faster :-) !
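For reference, the arithmetic behind that figure (using the conventional 1757 dhrystones/second for the VAX 11/780, i.e. 1 dhrystone MIP):

```python
# Derive the Z80's dhrystone MIPs figure from the measured T-state count.
TSTATES = 13_959_168    # T-states for 1000 dhrystones, as measured above
CLOCK_HZ = 3_500_000    # assumed 3.5MHz, ZX Spectrum-style Z80
VAX_DPS = 1757          # dhrystones/sec of a VAX 11/780 = 1 dhrystone MIP

seconds = TSTATES / CLOCK_HZ          # ~3.99s for 1000 dhrystones
dhrystones_per_sec = 1000 / seconds   # ~251 dhrystones/sec
dmips = dhrystones_per_sec / VAX_DPS  # ~0.143 dhrystone MIPs
print(f"{dhrystones_per_sec:.0f} dhrystones/s = {dmips:.3f} dhrystone MIPs")
print(f"vs the pdp-11/34's 0.25: {dmips / 0.25:.0%}")
```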

Compiling SDCC Dhrystone

SDCC supports a default 64Kb RAM Z80 target, basically a Z80 attached to some RAM. I could compile Dhrystone 2.0 with this command line:

/Developer/sdcc/bin/sdcc -mz80 --opt-code-speed -DNOSTRUCTASSIGN -DREG=register dhry.c -o dhry.hex

The object file is in Intel Hex format, so I had to convert it to raw binary first (using an AVR tool):

avr-objcopy -I ihex -O binary dhry.hex dhry.bin

SDCC also provides a Z80 simulator, uCsim, but unfortunately it's not cycle-accurate (every instruction executes in one 'cycle'). So I wrote a simulated environment for libz80, which turned out to be quite easy. I used the following command line to run the test:

./rawz80 dhry.bin 0x47d

The command line simply provides the binary file and a breakpoint address. The total number of TStates is listed at the end.
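The harness logic is simple enough to sketch. This is an illustrative Python outline of what rawz80 does; the real program wraps libz80 in C, so the step() function below is a stand-in for libz80's per-instruction execution, not its actual API:

```python
# Sketch of the rawz80 control loop: load a raw binary at address 0,
# execute instructions (accumulating T-states) until the program counter
# reaches the breakpoint address, then report the total.
def run_to_breakpoint(binary, breakpoint, step):
    memory = bytearray(65536)         # 64Kb of RAM, binary loaded at 0
    memory[:len(binary)] = binary
    pc, tstates = 0, 0
    while pc != breakpoint:
        pc, cost = step(memory, pc)   # execute one instruction
        tstates += cost               # accumulate its T-state cost
    return tstates

# Toy stand-in 'CPU' for demonstration: treat every byte as a 4-T-state NOP.
def nop_step(memory, pc):
    return pc + 1, 4

total = run_to_breakpoint(bytes(8), 8, nop_step)
print(total)   # eight 4-T-state NOPs: 32 T-states
```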

The entire source code is available from the Libby8 Google site (where you can also find out about the FIGnition DIY 8-bit computer).

So Why Did People Feel The Pdp-11 Was So Fast Then?

By rights the pdp-11 shouldn't have been fast at all.
  1. The pdp-11 was typical for the minicomputer generation: the CPU (and everything else) was built from chains of simple, standard TTL logic chips, which weren't very fast.
  2. It was designed for magnetic core memory and that was slow, with a complete read-write cycle taking around 1µs.
  3. It was part of DEC's trend towards more sophisticated processors, which took a lot of logic and slowed it down further. 
But (3) is also what made it a winner. Its sophistication meant that it was a joy to program and to develop high-quality programming tools for. That's probably a good part of why both the language 'C' and the Unix OS started out on a pdp-11.

By contrast, although early 8-bit microprocessors were built from custom logic and faster semiconductor memory, the sophistication of their CPUs was limited by the fabrication technology of the day. So a Z80 had only 7000 transistors and an architecture geared for assembler programming rather than compiled languages.

And there's one other reason. The pdp-11 supported a fairly fast floating-point processor and could execute, for example, a floating-point multiply in typically 5.5µs, something a Z80 certainly can't compete with.

Monday, 24 June 2013

Calgary Flooding Podcast Transcript

Hi folks,

I had just finished a Facebook status on India's recent 60-year flooding event (just a fortnight or so after Central Europe's multi-century flooding event) when I discovered that 70,000 people had been evacuated from Calgary because of yet another flooding event.

Most media reports (but not this one) are bending over backwards to play down the connection, but there's an awful lot of 'freak' weather going on these days. Bob Sanford's interview on Anna Maria Tremonti's podcast about Calgary's flooding just a few days ago was a superb description of the current state of climate science and extreme weather events.

It's so informative, I thought I should provide a transcript.

Anna Maria Tremonti: "Well, Bob Sanford lives in Canmore too, but he's in Winnipeg this morning. He's been trying to make sense of these and other severe floods and he's come to a disturbing conclusion. Bob Sanford is the chair of the Canadian Partnership Initiative of the United Nations Water For Life Decade and the author of 'Cold Matters - The State and Fate of Canada's Fresh Waters.' Good morning!"

Bob Sanford: "Good morning, Anna."

Anna: "Well, what do you make of what we're seeing across Southern Alberta this morning?"

Bob Sanford: "Well, to scientists working in the domain of climate effects on water this is, really the worst of all possible outcomes. We built on flood plains because we thought we had relatively stable climate, the climate that we've experienced over the past century. We thought it would stay the same. We also thought that we had a good grasp of how variable we could expect climatic conditions to be based on what we've experienced in the past century.

And now we've discovered that neither assumption was correct. We do not have adequate means to protect development on flood plains; climatic conditions are more variable than we thought and that variability is increasing as climate changes and we've also discovered that our hydrologic conditions are changing."

Anna: "So what do floods like this tell us about what's happening with our water cycles?"

Bob Sanford: "Well if we put all of the data together they tell us that warming temperatures are altering the form that water takes and where it goes in the hydrosphere. Evidence that increasing temperatures are accelerating the manner and rate at which water is moving through the hydrological cycle is now widely enough available to allow us to connect the dots with respect to what's happening in Canada. So let's start very briefly in the Canadian Arctic. In the North and throughout much of the Canadian Boreal, water that's been trapped as ice in the form of glaciers and as permanent snow pack and permafrost is, is in decline. And the same sort of thing is visibly evident in Canada's Western mountains. There's now evidence that we've lost as many as 300 glaciers in the Canadian Rockies alone between 1920 and 2005. And the same thing that's causing our glaciers to disappear is (in combination with landscape change) changing precipitation patterns on the great plains.

And the same warming is causing water left on the land after the last glaciation in the great lakes region to evaporate. So, you might well ask 'where's all this water going?'  And one of the places it's going is into the atmosphere where it becomes available to fuel more frequent and intense extreme weather events such as the one that you had in Toronto in 2005 that cause $700m [Canadian] worth of flood damage to infrastructure, roads and homes. And you may remember that, in that year, that Calgary just dodged the same kind of bullet - well - not this time. And what we're seeing here is that rising temperatures and the increasing concentration of atmospheric vapour are making what were once predictable, natural events, much worse and what we've discovered is that the atmosphere holds about 7% more water vapour for each degree celsius temperature increase.

And what this tells us is that the old math and the old methods of flood prediction and protection won't work any more. And until we find a new way of substantiating appropriate action in the absence of this hydrologic stability, flood risks are going to be increasingly difficult to predict or to price, not just in Calgary or Canmore, but everywhere."

Anna: "So you're saying, then, that there's more condensation in the air. Warm air can hang on to water longer and then - burst when it hits somewhere that can no longer hang on to it?"

Bob Sanford: "Well, warmer atmosphere is more turbulent and it carries more water vapour. And we're seeing that happening widely. We're also seeing in North America disruption in the Jet Stream which is allowing climatic events to cluster and remain in places for longer periods of time, resulting in more intensive floods and droughts. And we're seeing this as a result of the general warming in the atmosphere."

Anna: "And you've said that this is because of Climate Change. How do we know that this isn't just a fluke, an outlier?"

Bob Sanford: "Well, we know that the Clausius-Clapeyron relation is one of the standard logarithms, or algorithms, that we use in climate science. And as the temperature increases, we know what we can expect in terms of water vapour increases in the atmosphere, and we're beginning to see some very interesting phenomena associated with this. Things like atmospheric rivers: great courses of water vapour aloft that can carry between 7 and 15 times the daily flow of the Mississippi, and when these touch ground or are confronted by cooler temperatures, that water precipitates out and what we see is huge storms of long duration and the potential for much greater flooding events."

Anna: "So, what you're saying is this is part of a broader pattern across North America."

Bob Sanford: "Well, unfortunately, this may be the new normal. I regret to say that everything we know about how climate affects the hydrologic cycle supports or suggests that events like this are likely to be more common. And the insurance industry has already warned us of a trend towards more intense and longer duration storms that cause more damage especially in areas of population concentration. And this is certainly what we're seeing in the Calgary Area."

Anna: "What are you hearing from people you know in Canmore?"

Bob Sanford: "Well, there's a great deal of concern about how long this event is going to last and, as we heard from residents there this morning on your show, there is a deep concern about how much damage has been done to very expensive infrastructure, roads and bridges. So we're going to have to wait until the storm is over to determine exactly the extent of those damages."

Anna: "What should we be doing to address the situation you're describing?"

Bob Sanford: "Well, I think that it's important to recognise that the loss of hydrologic stability is a societal game-changer. It's already causing a great deal of human misery widely. So we're going to have to replace vulnerable infrastructure across the country with new systems designed to handle greater extremes and this is going to be very costly. We're also going to have to invest more in the science so that we can improve our flood predictions."

Anna: "As you look at what's unfolding across Southern Alberta - not surprising to you? Surprising? The residents there certainly are saying it was completely unexpected."

Bob Sanford: "Well, I don't know if it was entirely unexpected. We know that there's great variability in our climate naturally. But we also know that some of these influences are affecting the frequency of these storm events. And researchers at the University of Saskatchewan's Kananaskis research centre have already predicted that events of this sort will be more common.

No-one likes to be right on such matters, but it appears that these are going to be events that we're going to see more frequently in the future."

Anna: "Um-hmmm, that's a rather grim forecast. No pun intended."

Bob Sanford: "It is grim, but I think that if we accept that what we see happening right in front of our very eyes is real, then we can begin to adapt and begin to rethink how we situate our homes and our infrastructure on flood plains. We can begin to think about how we're going to adapt to more extreme weather events. It's certainly not outside the domain of human possibility to do so, and we should be acting in that direction."

Anna: "Well Bob, good to talk to you. Thanks for your time this morning."

Bob Sanford: "Thank you."

Thursday, 28 February 2013

Raspberry Pi Plum Pulling

I'm disappointed that Eben Upton, Technical Director of Broadcom, which makes the patent-protected, closed-source BCM2835 chipset inside the Raspberry PI, has chosen to use his product to promote free-trade ideology in developing countries.
"..A less positive experience has been the impact of state-monopoly postal services and punitive tariffs - often as high as 100% - on availability in markets such as Brazil.
There, a $35 (£23) Pi will currently cost you the best part of $100 (£66).
I believe that these measures, aimed at fostering local manufacturing, risk holding back the emergence of a modern knowledge economy in these countries."

I've just been reading 23 Things They Don't Tell You About Capitalism, by Ha-Joon Chang (a Korean economist based in Cambridge, like Eben... perhaps they should meet up in a pub sometime?). One of his key points is how rich countries got rich by creating tariffs, which they now expect poorer countries to forgo under the pretense that it'll improve poor economies.

The problem is that it's one rule for us and another rule for them. Broadcom, the BCM2835 and, to an extent, the Raspberry PI are products of this; it creates low barriers to entry for wealthier entities like Broadcom and high barriers to entry for less wealthy entities, like Brazil or ... me.

For example, the Raspberry PI (which I believe is a great idea) is roughly the same price as a FIGnition: about £25 compared with £20 for my product. Yet the Raspberry PI is about 10,000 times more powerful (and complex) than FIGnition. That's because, firstly, the Raspberry PI (Model B) is built in China (which reduces costs).

Secondly, it's massively mass-produced (which reduces costs).

Thirdly, they have the sophisticated tools to both design the chipset and the PCBs - a hobbyist can't build one at all even if given the components, a perfectly steady hand and a microscope (an infinite barrier for a hobbyist).

Fourthly, they have access to all the design documents - Broadcom don't even publish them for the general public (another infinite barrier for hobbyists).

Fifthly, they have volume access to suppliers, at a much reduced (secret) cost, which the general public (like me) don't have access to.

So you can see, there are barriers I face that others don't. But this is OK for me - I wouldn't produce a Raspberry PI instead of a FIGnition anyway, because I want people to build computers that can be understood - and that means building rudimentary computers that can be reproduced by the general public. In the same way that we teach kids to read using phonics rather than War And Peace, and don't believe that books have superseded the alphabet, I believe that if we want kids to master technology, they really need to start at the basics.

Similar barriers apply to developing countries. There are inbuilt barriers facing them, starting with, for example, the fact that the brightest people in developing countries end up in places like Cambridge. Developing countries therefore often want to protect their own markets; for example, Ghana would like to grow and eat its own chickens, but instead Europe dumps its excess chickens on them, so they can't develop their own market.

It's unfair. But it's doubly unfair for a protectionist company like Broadcom, which makes use of cheap Chinese labour, to force open markets in developing countries for its own benefit. Surely, surely, if Broadcom really did want Brazil to have lots of cheap Raspberry PIs, they'd simply give the information to Brazil to procure their own production runs - ship the raw chips to factories in Brazil and let them make their own PIs. There would still be the chip trade barrier, but not for the added value of board production and assembly.

Problem solved - and the PI would be used for its intended purpose, not for sticking in thumbs and pulling out plums. Please Eben, you're a nice guy (as I recall from the BBC@30 event); don't use the PI as a free-market political tool :-)

Monday, 11 February 2013

The Big Wedgie

This is just a short blog post about an interesting development in the Arctic. I keep a regular lookout on websites such as NSIDC's Arctic Sea Ice News and, importantly, Neven's Sea Ice Blog.

One of the big predictions for forthcoming years is the collapse of the Arctic ice cap, which may happen in just a few years. This graph makes it quite clear what will happen:
It's a scary graph and implies that we'll have a September minimum of 0Km3 as early as 2015; an August ... October minimum of 0Km3 as early as 2016 and a July minimum of 0Km3 as early as 2017. That is, a rapid collapse of the Arctic sea ice. However, there are wide error bars, and so future predictions should be treated cautiously.

In the Arctic much of the ice disappears every year (first-year ice), but some remains (multiyear ice). The resilience of Arctic sea ice depends upon multiyear ice, because it's thicker. Most of the multiyear ice was lost in 2007 and it has progressively depleted since then.

As you can see, most of the remaining multiyear ice (about 20% of the ice is >=4 years old) clings to the north coast of Greenland and the islands north of Canada, and the thinking is that any ice that clings on beyond 2016 or so will be there. This might not happen. Here's why. Arctic ice is regularly churned out through the Fram Strait, the sea between Greenland and Svalbard, thanks to Arctic ocean currents that head up north from the Atlantic (the same currents that give the UK warm weather). You can see it here:

It's thought by some Arctic observers that the multiyear ice is held in place at the top of Greenland by what's called the Wedge. This could get swept out through the Fram Strait in just a couple of days. It'd be the big Wedgie for the Arctic, and would have serious consequences for the remaining multiyear ice and for whether the Arctic sea ice will in fact trend to around 1MKm3 or to nothing at all. Here's a model of the process:

The reason it can get swept out now is that the ice is so thin elsewhere in the Arctic - on average just over 1 metre thick. With thinner ice, sea currents have more opportunity to influence the Arctic sea ice, and in recent weeks observers have noticed a number of cracks appearing in the ice earlier than would be expected (large cracks do occur, just not normally this early) here:
(It's a false-colour image so you can see the contrast more easily; Greenland is bottom right, and the cracks show white against orange.)
Of course, it might not actually happen - I'll post a comment in a few days if it does!