Saturday 6 April 2024

Space Shuttle CPU MMU, It's Not Rocket Science

The NASA Space Shuttle used the AP-101S, a cut-down CPU influenced by the IBM System/360 and aimed mostly at military avionics applications. It's fairly weird, but then again, many architectures from the 1960s and even 1970s are weird.

I've known for a long time that the Space Shuttle initially had an addressing range of about 176K. One of the weird things is that it's 16-bit word addressed (what they call half-words), so this means 352kB. Later this was expanded to 1024kB (i.e. 512k half-words). How did they do this?

You might imagine that, being jolly clever people at NASA, they'd have come up with a super-impressive extended memory technique, but in fact it's a simple bank-switched Harvard architecture: the address range for both code and data is split in two, and 10 x 4-bit bank registers are used to map the upper half onto 64kB (32k half-word) banks.

So, the scheme is simple and can be summarised as:

Two of the 10 bank registers are placed in the 64-bit PSW, in the BSR and DSR fields. The other 8, the DSE registers, are used only when the effective address of an instruction uses a Base Register that's not 0 (which is when the effective address is of the form [BaseReg+Offset], [BaseReg] or [BaseReg+IndexReg+Offset]).
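To make that concrete, here's a minimal sketch in C of how I understand the translation. The structure and names are mine, and the exact rule for picking a DSE register is my reading of the documentation rather than gospel:

#include <stdint.h>

#define kBankSize 0x8000u /* 32K half-words per bank */

typedef struct {
  uint8_t iBSR;    /* code bank select, lives in the PSW */
  uint8_t iDSR;    /* data bank select for Base Register 0, also in the PSW */
  uint8_t iDSE[8]; /* data bank selects for the other base registers */
} tBankRegs;

/* Map a 16-bit half-word effective address to a physical half-word address. */
uint32_t Translate(const tBankRegs *aRegs, uint16_t aEA,
                   int aIsCodeFetch, int aBaseReg)
{
  uint8_t bank;
  if (aEA < kBankSize) {
    return aEA; /* lower 32K half-words are unbanked */
  }
  if (aIsCodeFetch) {
    bank = aRegs->iBSR;
  } else if (aBaseReg == 0) {
    bank = aRegs->iDSR;
  } else {
    bank = aRegs->iDSE[(aBaseReg - 1) & 7]; /* the [BaseReg+...] forms */
  }
  return (uint32_t)bank * kBankSize + (aEA - kBankSize);
}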

Documentation

The documentation for this, however, is overly convoluted, wordy and repetitive. The best source I could find is here, which seems to be the same document twice, where the second version is the better one, but written in 1987 using an early-80s word processor (WordStar?) instead of being typeset.

There's a bizarre memory diagram on page 2-19, which can only easily be understood by people who already understand the system, and which precedes a less incomprehensible, but difficult to visualise, flow-diagram description of the MMU:

Both of these diagrams are trying to say the same thing.

Surrounding this stuff are several paragraphs of badly-worded text explaining "banked-memory". It's obviously badly written, because the official text needed hand-written corrections! I had to trawl through it to check that the bank-switching worked as it appeared to, and it does. It's all very odd.

Bank-switching was a standard technique for expanding memory in late-1960s minicomputers (like the Data General Nova series or the DEC PDP-11), as well as in a plethora of 8-bit computers (including CP/M machines, the BBC Micro, Apple ][, Apple ///, Sinclair and Amstrad), and then again in the late 1980s on IBM PCs and clones as affordable RAM started to exceed 1MB. People knew how to expand memory with bank-switched techniques and, just as importantly, how to describe it. Most bank-switching diagrams look a lot like mine (mine are inspired by DEC PDP-11 user manuals).

So, why is NASA's documentation so poor here? My most plausible explanations are:
  1. Budget cuts and pressure in the 1970s and 1980s led to poor documentation. This can be checked by reading the quality of the documentation prior to this mid-80s document and/or later if standards improved.
  2. Justification: all NASA hardware, including the AP-101S was expensive, so convoluted documentation helps convey the idea that the public and NASA were getting their money's worth: if you can't comprehend it, you won't grasp how simple and standard it was.
  3. Small developer-base: documentation tends to suffer when not many people are developing for a given product. That's partly because there's a lack of resources dedicated to documentation, but it's also because documentation is frequently passed on verbally (what I sarcastically call Cognitive documentation, i.e. no documentation 😉 ). I don't know the size of the Shuttle software developer team, but I'd guess it was in the low hundreds at any one time. Although the memory space was only a few hundred kB, I believe they loaded in various programs to handle different mission stages (e.g. ascent, docking, orbital corrections, satellite deployment, landing), and if that adds up to a few MB of code, that's about 300,000 lines; given the safety requirements, perhaps only a few thousand lines per developer.
Nevertheless, I don't think the poor documentation is excusable - it implies the software quality is worse than is often claimed.

Conclusion

NASA engineers are often lauded for their genius. There's no doubt that NASA has to employ clever people, but like all computer engineers they make the same kinds of mistakes as the rest of us and draw upon the same kinds of solutions. The AP-101S was a 32-bit architecture cut down into a hybrid 16-bit / 32-bit architecture. Part of the initial simplification was to limit addressing to 16 bits, but as with every architecture (including NASA's AGC from a decade earlier), requirements exceeded that addressing capability. And even though the index registers were 32 bits, prior decisions (like the 24-bit addressing on early 32-bit Macintosh computers) forced more complex solutions than merely using more bits from the registers.

There are not many techniques designers can use to extend memory addressing, and NASA picked a common (and relatively conventional) bank-switching one. For reasons I'm not sure of, their documentation for it was pretty awful, and despite their best efforts at software standards, both the bank-switching mechanism and the documentation would have, sadly, reduced the quality of Space Shuttle software.



Thursday 28 March 2024

Mouse Detechification! A PS/2 To Archimedes Quadrature Mouse conversion!

After an embarrassing amount of work I managed to convert a PS/2 mouse into a quadrature encoding mouse for my Acorn Archimedes A3020.


A Mini-Mouse History

Doug Engelbart designed the first mouse for his (at the time) very futuristic NLS demonstration in 1968 (it was built by a colleague, Bill English). It had one button, was brown (like real mice :-) ) and used two wheels, which must have made it scrape quite badly when moved diagonally:


Xerox picked up the concept of a Mouse for their pioneering Alto workstation project. They shrunk the wheels and then added a ball at the bottom which could drive the wheels easily in any direction.


Other early mice from Apple, Microsoft, Smaky, Logitech, AMS, Atari, Sun, ETH, Commodore (Amiga) and Acorn were variations on this kind of design: quadrature-encoded mice, using a ball to rotate two orthogonal wheels and minimal electronics to convert the signals (often just a simple TTL Schmitt-trigger buffer).

Each wheel (one for left/right and the other for up/down) had about 100+ spokes, and an infra-red LED shone light between the spokes to a pair of sensors (per axis) on the other side. As the ball turned and moved a wheel, a spoke would block the light to the sensors and then let the light through when it passed. However, because the sensors were offset from each other, a spoke would block light to Sensor 1 before blocking light to Sensor 2, and then let light through to Sensor 1 before Sensor 2. So, the sensors would see this if you moved right (or up):


So, you can tell you're moving right (or up), because Sensor 1 gets blocked first each time, and left (or down) if Sensor 2 gets blocked first.
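Decoding this in software amounts to a little lookup table. Here's a sketch in C (the table layout and names are mine): treat the two sensors as a 2-bit state and look up the movement from the (previous, current) pair:

#include <stdint.h>

/* kQuadTable[prev][cur]: +1 = right/up, -1 = left/down,
   0 = no change or an ambiguous 2-transition jump. */
static const int8_t kQuadTable[4][4] = {
  /* cur:        00  01  10  11 */
  /* prev=00 */ {  0, +1, -1,  0 },
  /* prev=01 */ { -1,  0,  0, +1 },
  /* prev=10 */ { +1,  0,  0, -1 },
  /* prev=11 */ {  0, -1, +1,  0 },
};

int8_t QuadStep(uint8_t aPrev, uint8_t aCur) /* 2-bit sensor states */
{
  return kQuadTable[aPrev & 3][aCur & 3];
}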

The problem is that it takes a lot of wires for a mouse to work this way, and it requires a surprising amount of computer power to handle all the interrupts, so later mouse designs offloaded that to a microcontroller inside the mouse, which then communicated with the computer using a simpler, 1- or 2-wire serial interface: these were called PS/2 mice on PCs (Apple had a design called ADB, Apple Desktop Bus). Eventually, USB mice replaced them and optical mice replaced the ball and wheels.

I know I could have bought a USB to quadrature encoded mouse adapter (from here on, QE mouse, because I don't like the term Bus-mouse), but that seemed like a cop-out, so instead I decided to make things as painful as possible. The first stage was to cut out the blobtronics that made it a PS/2 mouse.


Then I added some pull-down / level-shifting resistors to see if the A3020 would recognise the signals as valid values, but it didn't. Then I went through a long process of getting an Arduino to recognise the analogue voltages from the phototransistors (yep, I also know I could have used a Schmitt-trigger buffer chip) and writing a small program to actually process them into quadrature-encoded X and Y grey code (the sequences are 00, 01, 11, 10, 00... for forward/up and 00, 10, 11, 01, 00... for backward/down). It turned out it wasn't very digital!


I figured I could solve the problem in software using a small MCU that would fit in the mouse, so I chose an ATTINY24, a 14-pin AVR MCU with 8 10-bit ADC channels and 4 other digital IO pins. I used a simple hysteresis algorithm: it first figures out the range of values it can get from the phototransistors, then allows a bit more time to see if the range is bigger; then, once it's interpreted any given state, the ADC values have to change by 2/3 of the range to flip into the next state.
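A sketch of that hysteresis idea in C (the 2/3 factor is from the algorithm above; the calibration details and names are my reconstruction, not the exact shipped code):

#include <stdint.h>

typedef struct {
  uint16_t iMin, iMax; /* observed ADC extremes; start at 0x3ff and 0 */
  uint8_t iState;      /* current digital reading of this channel */
} tChannel;

uint8_t ChannelUpdate(tChannel *aCh, uint16_t aAdc)
{
  uint16_t range;
  if (aAdc < aCh->iMin) aCh->iMin = aAdc; /* keep learning the range */
  if (aAdc > aCh->iMax) aCh->iMax = aAdc;
  range = aCh->iMax - aCh->iMin;
  /* Only flip once we've moved 2/3 of the way across the range. */
  if (aCh->iState == 0 && aAdc >= aCh->iMin + (2u * range) / 3) {
    aCh->iState = 1;
  } else if (aCh->iState == 1 && aAdc <= aCh->iMax - (2u * range) / 3) {
    aCh->iState = 0;
  }
  return aCh->iState;
}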

I went through quite a bit of a debugging process, because obviously when you go from a fairly lush environment like an Arduino (plenty of libraries, plenty of IO pins, lots of code space and relatively abundant RAM (2kB ;) )) to an ATTINY24 (2kB flash, 128 bytes of RAM, just 12 IO pins) and then stick it in a mouse, you have far fewer opportunities for debugging - and you can't even use an LED inside a trad ball mouse, because, ta-dah, you won't see it! So it's best to debug as much as possible in simulation before you take the final step. In fact the program only ended up being about 906 bytes, because 'C' compiles pretty well on an 8-bit AVR.

I made a lot of mistakes at pretty much every stage - but amazingly, getting the mouse directions inverted wasn't one of them :) .
  • I started with 10K resistors for the phototransistors, but ended up deducing 15K was best (22K || 47K) - so that was 8 resistors!
  • When soldering the analogue connections to the AVR I was out by one pin all the way along, because, from the underside, I mistook the decoupling cap pins for Pin 1 and Pin 14 of the AVR.
  • I had to desolder the phototransistor connections to the main cable - which I'd been using when analysing on the Arduino - then solder the phototransistors to the analogue inputs and the digital outputs to the original wires.
  • I tried to keep the wires no longer than they needed to be, because it was cramped in there, but I ended up making the Y axis analogue wires just 1mm too short (because, d'uh, they get a bit shorter when you strip them and feed them through a PCB), so they had to be redone.
  • Because there were a lot of wires I needed to glue them down, as otherwise the casing wouldn't close. I was particularly concerned about the phototransistor wires getting caught in the Y axis wheel, but then I glued them down directly underneath that wheel so it couldn't clip into place! I also glued the grey and green wires running up the right-hand side where they got in the way of the case closing - so all of these had to be very carefully cut out and moved.
  • Near the end I remembered I had to add a 10K resistor between reset and power so that the MCU would actually come out of reset and execute code!
  • I also had to add an input signal to software-switch the Y axis inputs to the X axis grey code, because the only header I could find to fit the mouse cable plug for testing didn't leave room to test both X and Y grey codes! Hence what looks like an extra button!

Finally I connected it all up, glued the board and wires down (after correcting the locations) and got: A BIG FAT NOTHING! I thought maybe I'd messed up the analogue ranges, so I retried it with a different range, and that didn't work either. Then I realised I could output debug serial from one of the grey code outputs to the Arduino and see what the ATTINY was reading! Bit-banged serial code can be very compact:

#include <avr/io.h>     // for PORTB; kSerTxPin is defined elsewhere as the TX pin mask.
#include <util/delay.h> // for _delay_us().

void PutCh(uint8_t ch)
{
    // (ch<<1) adds a start bit in bit 0 and |0x200 adds a stop bit.
    uint16_t frame=((uint16_t)ch<<1)|0x200;
    do {
        if(frame&1) {
            PORTB|=kSerTxPin;
        }
        else {
            PORTB&=~kSerTxPin;
        }
        frame>>=1;
        _delay_us(52.083333-11/8); // 19200 baud.
    } while(frame); // & after the stop's been shifted out,
                    // the frame is 0 and we're done.
}


Before I tried it, I did a sanity check for power and ground, only to find I hadn't actually connected up VCC properly!!!! I'd forgotten the final stage of solder-bridging the decoupling cap's +ve lead to Pin 1 of the AVR, and I ended up ungluing everything so I could see underneath.

But when I fixed this and packed it all back in the case: It Worked! I tried reading the digital outputs at the end of the cable on the Arduino and when I was satisfied (which only took a few minutes of testing) I decided to hook it up to my A3020 and hey-presto! I have a working QE Mouse!


I had been a bit concerned that the analogue sampling wouldn't be fast enough, so my algorithm has a heuristic whereby if it sees a jump of 2 transitions (00 <=> 11 or 01 <=> 10) it assumes it's a single step in the same direction as before. I could manage about 20K x 4 samples per second, but a little maths shows this will be fine: a full 640 pixels on the Arc's screen can still be sampled OK even if you cover it in about 3.2ms, and clearly we don't move a mouse that fast.
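That heuristic, bolted onto the QuadStep() lookup from the sketch earlier (again, my understanding rather than the exact code):

/* A 2-transition jump decodes as 0 from the table, but with prev != cur;
   treat it as one more step in the last known direction. */
int8_t QuadStepWithSkip(uint8_t aPrev, uint8_t aCur, int8_t *aLastDir)
{
  int8_t dir = QuadStep(aPrev, aCur);
  if (dir != 0) {
    *aLastDir = dir;
  } else if (aPrev != aCur) {
    dir = *aLastDir; /* ambiguous jump: assume we kept going */
  }
  return dir;
}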

!Paint, which is, like, 96kB I think! It's terribly unintuitive though! Note in the video that a 12MHz ARM250 can drag whole 4-bpp, 640x480 windows around rather than just outlines! Note also the Taskbar, which was standard on the Archimedes before Windows 95 or Mac OS X (NeXTSTEP had a floating Dock).

Here, again, is the link to the project and source code.



Sunday 24 March 2024

Colour Me Stupid - An Early Archimedes 8-bit Colour Quest!

I'm assuming, naïvely again, that I can write a short blog post, but on past performance this isn't likely. I recently managed to get my Acorn Archimedes A3020 working again by converting a PS/2 mouse to a quadrature mouse, as early Archimedes machines expect (see this blog post), and this has given me more of an interest in the system, particularly from a hardware viewpoint.

Wonderfully, I'm currently writing this on my MacBook Air M2, a descendant, 32 years on, of the ARM250 that powers that original A3020, so things have come full circle in a sense.

These early Arcs had a video chip called VIDC, which supports 1, 2, 4 and 8-bit colour video modes at different resolutions, but for decades (probably ever since the first review I saw in Personal Computer World, August 1987) I was confused as to why the 8-bit colour mode was described as merely having a 64-colour palette with 4 tints instead of proper 8-bit RGB colours.


Why create something complex, which doesn't even do the job? What kind of terminology is 'tints'? How do you really use them?

The confusion deepened when I started to look into the VIDC hardware, because it never supported a 64-colour palette plus tints; instead it always supported a 16-entry x 12-bit colour palette for all modes, including the 8-bit colour mode. So, how did that work?

Standard VIDC Palette Entry

Sup  Blue         Green        Red
S    B3 B2 B1 B0  G3 G2 G1 G0  R3 R2 R1 R0

The standard VIDC palette entry contains 4 bits for each component: Blue, Green and Red, oddly in that order rather than the conventional Red, Green, Blue. In addition, it has a sort-of single alpha bit ('Sup') which can be used for genlocking. There are just 16 palette entries, so any one of them can be selected in the 4-bits-per-pixel modes, while fewer of them are used in the 1-bit and 2-bits-per-pixel video modes.

In 8-bit colour mode, each 8-bit pixel is composed of a 4-bit palette entry index and 4 bits which directly replace the palette entry bits B3, G3, G2 and R3 above.

Direct       Palette
B3 G3 G2 R3  Palette entry 0..15

I've heard several claims that these ARM computers couldn't do proper 8-bit RGB colour, with 3 bits for Red, 3 bits for Green and 2 bits for Blue (human vision is less sensitive to blue). In fact, we can immediately see that by defining the 16 palette entries so that they literally provide the other bits, we get the equivalent of RGB332 (really BGR233). This gives us:

Direct       Palette
B3 G3 G2 R3  B2 G1 R2 R1

Now we have 3 bits for Green and Red, and 2 bits for Blue. This means that in theory we have a proper 8-bit RGB mode, where we can freely select any one of the full range of 256 colours such a mode can describe. Note: we don't have a palette of 64 colours + 4 tints; we have a palette of 16 colours + 16 tints each, and the palette can be assigned to provide the missing RGB bits.

How To Mess Up Palette Settings

A simplistic implementation of this would be to set all the remaining bits of the palette entries to 0, i.e. B1, B0, G0 and R0. This gives us the following palette:

B2G1\R2R1  R2R1=0  R2R1=2  R2R1=4  R2R1=6
B2G1=00    0x000   0x002   0x004   0x006
B2G1=01    0x020   0x022   0x024   0x026
B2G1=10    0x400   0x402   0x404   0x406
B2G1=11    0x420   0x422   0x424   0x426

To re-emphasise: by itself this would give a very dull palette, because B3, G3, G2 and R3 are never set, but as described earlier, these bits are provided directly by the upper 4 bits of each 8-bit pixel. Consider three pixels: 0x03, 0x63 and 0x83. They all use palette entry 3, which provides a medium-level red, 0x006, but the second pixel adds a green component of 0xc, making the colour 0x0c6, and the third adds a blue component of 0x8, making the colour 0x806 (purple-ish). Combining the palette entries and modifiers then gives this 256-colour range:


Here, the colour range has been generated by a BBC BASIC program running on Arculator, an excellent Archimedes Javascript emulator. It looks pretty decent. It's shown in two formats: a literal RrrGGgBb view is on the left, where each column represents the bottom 2 bits for green and both bits for blue, while subsequent rows increment the top bits for green and the red component. However, it's easier to block out by splitting the full range into 4 quadrants of RrrGGg, where each quadrant increments Bb. I also tried the same program on my real A3020, but this time in mode 28, as my monitor couldn't really cope with the non-VGA mode 13, and got this:


I made a programming typo with the linear representation, but the quadrant version looks better than the emulator! This shows that the palettes really can generate a reasonable RGB332 colour range.
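To make the combination concrete, here's a small sketch in C (the program on the Arc was BBC BASIC; the function names are mine) of the naïve palette definition and the hardware's bit-merging:

#include <stdint.h>

/* A 12-bit VIDC colour is 0xBGR: blue in the top nibble, red in the bottom. */

/* Palette entry n supplies the low bits B2, G1, R2 and R1 from its index. */
uint16_t NaivePaletteEntry(uint8_t aIndex) /* aIndex = pixel's low nibble */
{
  uint16_t b2 = (aIndex >> 3) & 1;
  uint16_t g1 = (aIndex >> 2) & 1;
  uint16_t r2r1 = aIndex & 3;
  return (b2 << 10) | (g1 << 5) | (r2r1 << 1);
}

/* The pixel's top nibble (B3 G3 G2 R3) replaces the palette's top bits. */
uint16_t PixelToColour(uint8_t aPixel)
{
  uint16_t b3 = (aPixel >> 7) & 1;
  uint16_t g3g2 = (aPixel >> 5) & 3;
  uint16_t r3 = (aPixel >> 4) & 1;
  return (b3 << 11) | (g3g2 << 6) | (r3 << 3) | NaivePaletteEntry(aPixel & 0x0f);
}

/* e.g. PixelToColour(0x03) == 0x006, 0x63 -> 0x0c6, 0x83 -> 0x806, as above. */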

There is a minor problem in that the colours don't span the full colour range: Red and Green can only go to 87.5% of the maximum (0xe) and Blue can only go to 75% of the maximum (0xc). The conventional way to address this is to replicate bits in the component, so the 8 proper levels for red and green would be: 0b0000, 0b0010, 0b0100, 0b0110, 0b1001, 0b1011, 0b1101, 0b1111; and for blue: 0b0000, 0b0101, 0b1010, 0b1111. And if the Arc had a full 256-colour palette, like every colour Mac from the Mac II onwards, that's exactly what would be done. Unfortunately, if you try to approximate this here, you get a worse arrangement:


There are two obvious problems: on the right-hand spectrum, I've outlined how the Red=3 and Red=4 values look almost the same (as do Blue=1 and Blue=2, outlined on the left). This is because the difference is only 1 in each case: Red=3 translates to 0x7 (from the palette) and Red=4 to 0x8 (from R3); while Blue=1 translates to 0x7 (from the palette) and Blue=2 to 0x8 (from B3).

It turns out, then, that the naïve palette assignment is the most evenly distributed one. And this brings us to the next observation:

A Hint On Tints: Exact 8-bit RGB Is Impossible

In a real 8-bit RGB palette, the blue values 00 to 11 scale to the full range of blue, 0x00 to 0xff, matching the same range as green and red, where 000 to 111 also scale to 0x00 to 0xff. However, the Archimedes palette (as alluded to earlier) scales unevenly: blue scales to 0xc0 while red and green scale to 0xe0. And since neither scales to 0xff, the colours will be more dull.

Instead, what we get is an effective 64-entry palette, where we only consider the top two bits of each component (BbGGxRry), plus 4 green/red tints for each one: xy = 00, 01, 10, 11. And this explains why the Archimedes manual always describes 8-bit colours in those terms, though the choice of their default tints is different.

One of the other major problems with this palette is that you can only have 4 proper BGR233 greys: 0x00, 0x52, 0xa4 and 0xf6. The slightly brighter colours 0xf7, 0xfe and 0xff are off-white: pink, green and yellow tints. Ironically, proper RGB332 can only manage two greys! Consider RGB332 represented by 256 x 12-bit BGR palette entries or 256 x 24-bit BGR palette entries. Black is RGB332=0x00, which maps to 0x000 in the 12-bit palette and {0x00, 0x00, 0x00} in the 24-bit one. The next closest is Red=Green=2, Blue=1, which is 0x544 in the 12-bit palette and {0x55, 0x49, 0x49} in the 24-bit palette - both slightly blue. Then Red=Green=4, Blue=2, which is 0xa99 in the 12-bit palette and {0xaa, 0x92, 0x92} in the 24-bit palette - again, both slightly blue; then Red=Green=7, Blue=3, which is 0xfff in the 12-bit palette and {0xff, 0xff, 0xff} in the 24-bit palette.

In both of those cases, even the 24-bit palette is 16% out from a true grey, which is distinguishable to the human eye.

The Alternative Palette Approach

The conventional VIDC 8-bit palette, with 64 base colours and 4 tints, I now understand, would look something more like this:

Direct       Palette
B3 G3 G2 R3  B2 R2 T1 T0

Where T1 and T0 represent the tints of white, which get added to all of B1B0, G1G0 and R1R0. This kind of palette would contain these entries:

B2R2\Tint  0      1      2      3
B2R2=00    0x000  0x111  0x222  0x333
B2R2=01    0x004  0x115  0x226  0x337
B2R2=10    0x400  0x511  0x622  0x733
B2R2=11    0x404  0x515  0x626  0x737
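Generating those 16 entries is a one-liner per entry (a sketch, following the bit layout above; the function name is mine):

#include <stdint.h>

/* Index bits are B2 R2 T1 T0; each tint step adds 1 to all of
   B1B0, G1G0 and R1R0, i.e. adds 0x111 to the 12-bit 0xBGR entry. */
uint16_t TintPaletteEntry(uint8_t aIndex)
{
  uint16_t b2 = (aIndex >> 3) & 1;
  uint16_t r2 = (aIndex >> 2) & 1;
  uint16_t tint = aIndex & 3;
  return (b2 << 10) | (r2 << 2) | (tint * 0x111u);
}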

This time we essentially have 4 dimensions to consider: Blue<2>, Green<2>, Red<2> and Tint<2>. I thought a recursive arrangement would be clearest, but it turns out that a semi-linear arrangement is:


Here, the 4 horizontally adjacent blocks are the tints, and each horizontal block of 4 is the red component; while the 4 vertically adjacent blocks are the blue component and each vertical block of 4 is green.

Trying to process images in this convention is challenging because (as we'll see in the next section) it's hard to calculate how to dither each component: the components aren't truly independent and can't be, since there are really only 3 primaries (Red, Green, Blue) but four components. For example, it's easy to see that the top-left and bottom-right tint groups are actual greys, but harder to see that there are two other grey blocks (which I've outlined). This means we can't independently adjust tints against RGB.

Nevertheless, this convention has several advantages over RGB332:
  • The RGB primaries are all evenly distributed, they get 2-bits each.
  • There are 16 levels of grey, which means that anti-aliased black text on a white background (or its inverse) can be done pretty well.
  • There's a brighter range because it goes up to 0xfff.
  • Human vision is more attuned to brightness than colour (which is why the HSV colour model is effective), so it's possible that this convention can represent images that we perceive as better.

Dithering

Even though we only have 256 imperfect base colours in our quasi-BGR332 palette, we can employ a standard technique called dithering to mask this. I wanted to generate the inner surface of an RGB cube (where black is the bottom, left, front corner and white is the top, right, back corner) to show how we can generate fairly smooth colour transitions by applying a fairly standard Floyd-Steinberg dither.

There's a really good blog post on dithering here, which covers far more forms of dithering than the ordered and FS dithering I knew about. FS dithering works by scanning the original image in raster order, computing the error from the closest colour we can generate, and then propagating the error to the immediate pixels on the right and below (or right and above):

        *     7/16
3/16  5/16    1/16

In fact, we can compute all of this by just maintaining a full-colour error array for a single row, plus the single pixel to the right.
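Here's a sketch of that single-row error buffer in C, reduced to one greyscale component for brevity (the real thing keeps an error per colour component; the names are mine):

/* aErr must point to aWidth+1 ints, zeroed before the first row and
   carried from row to row; aErr[x+1] is the error pushed down to column x. */
void DitherRow(const unsigned char *aSrc, unsigned char *aDst,
               int aWidth, int *aErr)
{
  int right = 0;     /* the 7/16 share flowing into the next pixel */
  int downRight = 0; /* the 1/16 share, deferred by one iteration */
  int x;
  for (x = 0; x < aWidth; x++) {
    int want = aSrc[x] + right + aErr[x + 1];
    int got = (want < 128) ? 0 : 255; /* nearest representable level */
    int e = want - got;
    aDst[x] = (unsigned char)got;
    aErr[x + 1] = downRight;     /* slot consumed; reseed it for the next row */
    right = (e * 7) / 16;
    aErr[x] += (e * 3) / 16;     /* below-left */
    aErr[x + 1] += (e * 5) / 16; /* below */
    downRight = e / 16;          /* below-right, landed next iteration */
  }
}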

3D Projection

So, dithering is fairly simple, but the 3D projection was actually fairly complex, because I couldn't just draw an image in normal 3D space: I had to scan the image, translating row and column pixels into (x,y,z) coordinates for the cube, culling pixels outside the cube and then calculating the furthest pixel at each of these points. Then x corresponds to red, y corresponds to green and z corresponds to blue. This involved quite a number of mistakes! To make the 3D calculations simple I generated an orthogonal projection where z=column*4/5+row*3/5, which is essentially a 3:4:5 triangle and avoids having to compute floating-point maths or square roots. The hacky calculations work as follows:

First we want to transform from (column, row) space (the screen coordinate) to (x,y,z), where y is down/up and z is depth. (0,0) is easy: it's (0,0,0); and any coordinate along the c axis is easy: (c,0) => (c,0,0). As we go up rows, the beginning of the cube starts slightly further to the right and, because of the projection, we know that (c,r)=(4,3) is also easy: it's (0,0,5). Similarly, any ratio where r=3c/4 is also (0,0,5c/4). When we're to the left of that axis, we're part of the left plane, so we need to draw that, and the column calculation is the smallest (e.g. (0,r) => (0,0,5r/3) > (0,0,5c/4), since 5c/4 is 0); but when we're to the right of that axis, the row calculation is the smallest. The back face is determined by the maximum depth of 255, so we simply limit the depth to that to generate it. (x,y,z) then map directly to (r,g,b).

In the end, I generated this Colour cube on the emulator:


It looks fairly smooth, but we can see some banding. Real-life images aren't smooth, so they don't tend to exhibit the same kind of artefacts.

Conclusion

Early colour microcomputers had many compromises because of speed and memory limitations. Many of them used palettes (Atari 8- and 16-bit, Acorn BBC and Archimedes, Macintosh II, Apple IIgs, PC EGA and VGA modes, the Amiga..) to provide a wider colour range than is possible given the number of allocated bits per pixel, and some used other tricks, such as colour attributes, which allowed a full colour range across the screen with a low colour-boundary resolution (e.g. VIC-20, ZX Spectrum, Commodore 64, also Amiga HAM mode). As frame buffer memory approached 64kB or above during the late 80s, it became possible to provide passable 8-bit true-colour video in home computers. The early colour Macintosh II, PC MCGA and Archimedes computers fall into this category. They all use palettes, though the Mac II and MCGA modes have 256 entries, each of 24 bits (18 bits in the case of MCGA).

The Archimedes A3020 inherits its graphics from the machine's Atari ST / Commodore Amiga-era incarnation, with a limited 16-entry x 12-bit palette and a cheap hack to support a 'true' colour mode. The alternative, a proper 256-entry palette, would have required a relatively costly 384 bytes of palette RAM (+18K transistors), or a late chip redesign and a later or more expensive release for the integrated ARM250 chip[1].

Acorn's tendency to be technically pedantic, I think, is what led them to claim this mode is really 64 colours + 4 tints, rather than a decent approximation to RGB332 built from 16 palette entries + 16 tints each. RGBT2222 has some advantages, but RGB332 (really BGR233) makes the most sense as a colour range, because all the others lead to either greater banding or a less coherent relationship between pixel bits and primary components. It turns out that it's possible to achieve a reasonable approximation to RGB332 on an Archimedes.

Notes [1]: ARM250 die from the Microprocessor Report linked earlier. The ARM CPU itself requires 29K transistors, so adding 18K transistors to VIDC would have resulted in a notable increase in size and cost for the 100K transistor, $25 chip.




Monday 22 January 2024

Starring The Computer: Holly's Psion MC400 In Die Hard 2

Well, Die Hard 2 fans! This is the moment you've been waiting for… What laptop was Holly Gennaro-McClane using on the plane???

It turns out the answer is: a Psion MC400, i.e. an early-1990s notebook computer! There aren't many shots of the computer in the movie, and the only purpose they seem to serve is to contrast Holly's pro-technology attitude with John's anti-technology attitude. We'll start with the last shot, where she packs it away, at about 1:16h into the film:


It's quite clearly the same computer: you can see the PSION insignia at the top left; the ribbing going through the centre; the model name on the metal band at the bottom.

There's more to the Psion MC400 than the outside. At 2:20 you get a fairly decent shot of the keyboard. I enlarged it so you can see it against a stock image.




It's really satisfying to do this, for three main reasons. The most important is that I've worked it out and I can't find any other reference to it on the net (I've tried googling "Die Hard 2" and "Psion MC400", and I've tried looking at the excellent website Starring the Computer - it doesn't appear in either).

The next reason is that it was only when my wife and I saw the movie at the beginning of January 2024, on BBC iPlayer, that it occurred to me that I could find out what computer it was. Given that even in the late 80s and early 1990s there were probably a number of laptop-style computers it could be, tracking it down from the few clips we have of it seemed like a tantalising, but challenging, exercise. The weird thing was that when I looked at the computer, I felt I'd seen it before, and it wasn't long before I guessed it might be a Psion MC series, because of the light keys against a dark casing (all PC laptops of the time were either too big to fit on an airplane seat (e.g. Toshiba T1000) or had a beige case, or both). So, really, the first computer I tried to check it against was the Psion MC400.

The third reason is to do with the nature of the MC400 itself: it was a remarkably advanced computer from a British company and also very unusual, partly because it wasn't very successful. Seeing a British laptop in a US film is particularly remarkable.

You can see how advanced the MC400 looks in comparison with, e.g., a Toshiba 1200 from 1989 (when the movie was shot, assuming post-production took 6 months). Can you imagine Holly lifting that with one hand to put in a bag, or it even fitting on the tray table for her seat?


And this tells you why the MC400 was the perfect laptop for demonstrating hi-tech Holly in the movie: the computer can be handled easily (only about 2kg) and fits nicely on the tray shelf, something that wasn't possible for any other laptop of the era. I have to be careful with the phrasing here, because there were palmtops at around the same time and also some awkward notebooks like the Epson PX-8, but they were quite clearly not like a modern laptop.

To illustrate why it was so progressive: one of the cleverest things about the MC series is the trackpad, which is above the keyboard. As far as I remember from the very brief time I used one, it used absolute positioning (top-left and bottom-right on the pad map to top-left and bottom-right on the screen) and accurate placement involved rolling your finger on the pad. At the time, laptops either had no mouse/trackpad/trackball (because PCs didn't have Windows); or you'd plug in a serial mouse (like most people still do with a PC laptop); or, in a few cases - and maybe this is a few years later - you could get a weird mini trackball that plugged into the side of the keyboard.

After the failure of the MC series, Psion re-used its multi-tasking architecture and OS (EPOC) in a new product: the first (and incredibly successful) useful PDA, the Psion Series 3 🙂 !



You can see how it shares some of the design language of the MC400 series: the ribbed casing and silver logo followed by the dark model-name banner. In addition, the keys have a similar colour scheme. The big differences are the screen size (240x80 pixels vs 640x400) and the built-in apps.

It was only with the arrival of the PowerBook 100 in 1991 that the trackball/trackpad was moved to the front (and neither Apple nor anyone else used a trackpad for at least another 5 years, which, again, shows how advanced the MC400 was).

After the PB100 appeared, all laptops were laid out this way (apart from IBM's flirtation with the ThinkPad 'nipple'):


The MC400 (and Psion Series 3) had one last trick up their sleeves: they were based around a pure solid-state storage model, involving SRAM and serial Flash cartridges that plugged into the unit rather like modern USB memory sticks, except that they would fully insert.




Ground-breaking machines!

Friday 29 December 2023

The Humble Microfloppy Disk: A Vehicle of Insidious Cultural Imperialism

I think this is the longest title I've had for a blog post!

And yet the post should be relatively short.

I came across this video about the history of the microfloppy disk: the 720kB / 800kB, 1.4MB removable disk format that lives on in the shape of the Save Icon, and in the classic (but only marginally funny) joke about a kid thinking that one is a 3-D print of the Save Icon.


[https://youtu.be/djsyVgTGaRk?si=Kd0Z1nrqXfmUG15c]

It's an intriguing history, mostly because there was a fairly rapid transition from 8" floppy disks to 5.25" floppy disks in the 1970s; but then, despite Sony's microfloppy arriving at the very beginning of the 1980s, and being so superior, it took about 5 to 7 years before it started to dominate (hint: the IBM PC standard held it back).

But one fact really blew my mind: it turns out the 3.5" microfloppy doesn't exist. Let's say that again - the 3.5" microfloppy doesn't exist.

In reality it's 9cm, not 3.5". I've used them since the mid-1980s and in all those 40 years, I never knew this - I was duped by some Cultural Imperialism!

In retrospect, it should be pretty obvious that the 3.5" microfloppy is unlikely to have a specification in inches, simply because it was made by Sony, a Japanese company, and Japan uses metric. CDs, for example, are 12cm - they were designed in Europe and Japan. 3.5 inches is 8.89cm, just over 1mm less than the correct size for a microfloppy disk, and that 1mm matters.

We can prove this to ourselves by measuring one (which I did) and then taking a photo. The trick, though, is to compensate for parallax, since if you're looking at the disk from the centre, the width could indeed look about 1mm shorter, depending on the thickness of the ruler you use. In this photo, I did it by using a panoramic shot. That way I can measure 0cm (actually 20cm) directly above the left-hand side of the disk and 9cm (actually 29cm) directly above the right-hand side of the disk, and you can see that I didn't move the ruler, or cheat by some other mechanism (though vertically, you can see it isn't straight).



Why is cultural imperialism important? The answer is that metric versus imperial measurements is a practical issue, blocked by political games: metric measurements are objectively better, but many people in power have an agenda to maintain historical measurement systems.

Why would they do that? The reason is that Imperial measurements are more complex, and that makes it easier to manipulate people - to pull the wool over their eyes. This works because different types of units aren't easily comparable (e.g. weight, mass, volume, length and time) and different scales for the same kind of unit use different bases (e.g. 12 inches per foot, 3 feet per yard, and almost no-one knows how many yards there are in a mile).


This presents a barrier to understanding, which reduces people's ability to process units down to just comparing values of the same kind of unit. It has an actual impact on maths attainment in the UK [@todo MetricViewsLink].

For example, someone sells 7oz of cherries for 2 crowns and 1lb of cherries for £1.5s.6d. Which is better value? To know that, you need to know there are 16 ounces in a pound; 5 shillings in a crown; 20 shillings in a pound (money); and 12 pennies in a shilling. Then you convert everything into ounces and shillings (or maybe pennies), leading to 7oz for 10 shillings and 16oz for 25.5 shillings. Now you can work out that the 7oz lot is, per ounce, (just) the cheaper one.

That's how it was in the UK before February 1971, when we switched from £sd on Decimal Day. It took well over a century, from the mid-1800s to the mid-1960s, before the UK finally managed to agree. At the time, people were worried that decimalisation would cause traders to con customers, yet they never considered that it was much easier to con people using £sd money.

Nobody alive in the UK would consider shifting back to that awful system, yet we, who are generally in favour of metric measurements, are quite happy to let Imperialists force us to use non-metric units. And that's because there is effectively a deliberate attempt by them to switch everyone back: they convert metric to imperial units and then delete references to the metric units, and when questioned they appeal to 'patriotism' or to your compassion for their stubbornness.

A case in point is the 2022 UK government consultation on Imperial measurements, billed as allowing us to use imperial measurements. But that was a lie, since we can already use imperial measurements in the UK; we just have to include metric measurements and make them at least as prominent. What the government wanted instead was to be able to omit metric measurements, and to further that aim, they rigged the consultation so that it wasn't possible to let the government know you preferred metric. All the questions were along the lines of "Do you want things to remain as they are, or allow metric to be omitted?" Therefore the balance of responses had to tilt in favour of eliminating metric.

In the end, over 100,000 responses were submitted, and respondents, including myself, found ways of asserting their preference for metric (via the occasional "other comments" boxes). Because the consultation didn't go the way the government wanted, they didn't publish the findings within the 12-week period they promised, but waited a year.

We found out the results on December 27th. Over 98.7% said, as clearly as possible, that they preferred the current rules or metric-only, so the government... introduced imperial measurements for bottles of wine "as a first step" towards more Imperialism: something no-one wanted, which supermarkets are already saying they won't stock, and which is impossible to sell on a global market anyway.

It's all covered by the pro-metric UK society at metricviews.uk: how to respond to the survey; mistakes & bias in the consultation; how the survey could have been fixed; government ignores complaints about the survey; why no response after a year; and finally, government confirms 99% don't want more Imperialism.

In conclusion, imperial measurements are embarrassing in the 21st century, but coercion is being used to perpetuate them. What we need is #MoreMetric.



Thursday 28 December 2023

Dialog Disillusion - The Mac Programming Primer Let Me Down

Introduction

We did a bit of Macintosh programming at UEA, my undergraduate university, between 1986 and 1989. There we mostly used the interpreted MacPascal and a couple of sheets listing the ToolBox APIs. We had infrequent access to MPW Pascal on the Mac IIs in the 3rd year, but the vast majority of development was done on Mac 512Ks and Mac Pluses.

This meant that we didn't really learn Macintosh programming properly. That's partly because MacPascal didn't support it properly (it used wacko inline functions to access the ToolBox), partly because we didn't get enough time on the Macs and partly because we just didn't have enough usable documentation.

So, when I found a copy of The Macintosh Pascal Programming Primer in about 1993 when I finally had a Mac (a Performa 400), I was overjoyed! I followed the entire set of examples from beginning to end and found them really educational: a whole bunch of well-written example applications that covered most of the needs of Toolbox API-based programs. The only difference was that I was using THINK C 5.0.4 instead of THINK Pascal, but it was easy to translate.

I used this knowledge to write a 16-bit RISC CPU simulator that gained me access to an MPhil degree in Computer Architecture at the University of Manchester between 1996 and 1998.

The Problem

Recently I've wanted to write a simple simulation framework for the classic Mac OS that consists of a dialog box to enter parameters and a main window to run the simulation. I probably want to integrate the dialog box with the main window and allow it to be updated live, but to start with I thought it would be easier to use a modal dialog, so that the user interaction would be:
  1. Edit the parameters
  2. Start/Restart the simulation
  3. Maybe stop the simulation before it ends
  4. Go back to steps 1 or 2 or Quit.
I started by taking the demo THINK C program, oopsBullseye, and then adding the dialog-handling code from the Macintosh Pascal Programming Primer. But it didn't work - it just crashed the Mac every time (actually just a Mini vMac emulator, but it's still a crash).

I wondered what kind of mistake I'd made, so I went back to the original Dialog chapter (chapter 6) and followed it through. Lo and behold, it worked. I still couldn't see where I'd gone wrong, but I thought it was because, in my version, I could see that the dialog box appeared behind the main window, and that seemed to hang it. So I modified the Dialog demo to make it more like my application: the window would be open all the time (not just when the countdown was happening) and countdowns could be interactively started or restarted. I had to support update events.

And then I found that this modified dialog application didn't work any more either! It had the same problem: the dialog box appeared behind the main window and crashed when ModalDialog was called. I scoured my copies of Inside Macintosh and Macintosh Toolbox Essentials (I have a paper copy of both) and found some example code for modal dialogs, but it still wasn't obvious what the difference was.

It turns out that the Macintosh Pascal Programming Primer is doing Dialog boxes really badly! I was gutted! Their example uses a couple of poor idioms which would mislead other programmers and it makes all the difference.

Analysis

TMPPP does two basic things that are wrong.

Firstly, it creates a global dialog box in DialogInit() (by using GetNewDialog(..) to read it in from the application's resources), which then sits there in memory all the time as a window you can't see. This means that when any other windows are created, the dialog box will pop up behind them when ShowWindow(dialogPtr) is called, and then ModalDialog(..) will crash (the Mac!).

What it should do is create the dialog box when needed, using GetNewDialog(..), i.e. in the dialog-box handler, and when the user has finished with it, dispose of it (DisposeDialog()). Then the operation of the modal dialog is handled all in one place, and the Mac can deallocate the dialog box memory when it isn't needed, which is what we want.

Secondly, it violates the standard model / view / controller paradigm. Here, essentially, the model is the set of parameters used by the dialog box. But in their example, they store the actual parameters in the dialog box items themselves; then, when the dialog handler is called, they're saved to an internal data structure; and only if the user presses Cancel is the internal data structure used to restore the dialog box itself.

It should be done the other way around: the model is the internal data structure. When calling the dialog-box handler, the parameters should get copied to the dialog box items (which is equivalent to RestoreSettings(..)); and when the user quits by clicking [Save], the new item values are copied back to the internal data structure, which is the SaveSettings(..) operation ([Cancel] doesn't do anything; it just quits dialog box operations without updating the internal data structure).

The New Dialog Box Demo

So, my new Dialog Box demo is included here. It's significantly shorter (<500 lines), doesn't use the Notification Manager, and, importantly, does use the modal dialog box the way it's supposed to be used. I avoid most of the repeated copying of lines of code by factoring out the code that sets and gets controls. I think this is going to be just as easy for new programmers to understand, because they won't have to scan a whole set of very similar lines of code to work out what each set is doing: they can just go back to the lower-level getter/setter code, and when they write their own dialog boxes, they'll be more likely to factor theirs too.

The resources are almost exactly the same. I removed a menu option, because it no longer applied. You don't need the SICN icon.


For each resource: create it and click [OK]; edit the fields and close the editing window; then fill in its Info (ID, Name, attributes - via Resource:Get Resource Info) and close both windows:

  • DITL - see the Parameters image below; start with the Save, then Cancel buttons, then the other fields. Info: ID=400, Name="Alarm", Purgeable (only).
  • DITL - see the About image below; start with the OK button, then the text field. Info: ID=401, Name="About", Purgeable (only).
  • ALRT - Top=40, Bottom=142, Left=40, Right=332, DITL=401, Default Color. Info: ID=401, Name="About", Purgeable (only).
  • DLOG - Top=40, Bottom=200, Left=60, Right=320, DITL=400, Default Color, Standard double-border Dialog style, Not initially visible, No close box. Info: ID=400, Name="Alarm", Purgeable (only).
  • MENU - [X] Enabled, Title=• Apple Menu [ENTER]; [X] Enabled, Title="About..." [ENTER]; [ ] Enabled, • Separator line. Info: ID=400, No attributes.
  • MENU - [X] Enabled, Title="File" [ENTER]; [X] Enabled, Title="Settings..", Cmd-Key:S [ENTER]; Title="Run", Cmd-Key:R [ENTER]; Title="Quit", Cmd-Key:Q [ENTER]. Info: ID=401, No attributes.
  • MENU - [X] Enabled, Title="Edit" [ENTER]; then, with [ ] Enabled (unticked): Title="Undo", Cmd-Key:Z [ENTER]; Separator line [ENTER]; Title="Cut", Cmd-Key:X [ENTER]; Title="Copy", Cmd-Key:C [ENTER]; Title="Paste", Cmd-Key:V [ENTER]; Title="Clear", Cmd-Key:none [ENTER]. Info: ID=402, No attributes.
  • MBAR - each time, click in '****' and choose Resource:Insert New Field(s), for Menu Res IDs 400, 401, 402. The top should say "# of menus 3" at the end. Info: ID=400, No attributes.
  • WIND - close the graphical editor, then choose Resource:Open Using Template [WIND] [OK]. Bounds Rect=70, 36, 106, 156 [Set]; Proc ID=0; Visible=false; GoAway=false; RefCon=0; Title="Countdown"; Auto Position=$0000. Info: ID=400, Name="Countdown", Purgeable (only).

Parameters Ditl


About Ditl




When you've finished, close the .rsrc file. ResEdit will ask you to save it - save it. Then open up the Dlog.π project. Choose File:New and create a stub of a C program:

int main(void)
{
    return 0;
}

Choose File:Save to save it as Dlog.c. Choose Project:Add "Dlog.c" to add the file to the project. You don't need to do anything clever to add the rsrc file to the project; THINK C will automatically associate the .rsrc file that has the same prefix as your application.

Now you want to replace the dummy program with the rest of the file. When you've finished...

Dlog.h

/**
* @file: Dlog.h (adapted from TMPPP's Reminder.h)
*/

#ifndef Reminder_h
#define Reminder_h

#define kBaseResId 400
#define kAboutAlert 401
#define kBadSysAlert 402

#define kSleep 60

#define kSaveButton 1
#define kCancelButton 2
#define kTimeField 4
#define kSOrMField 5
#define kSoundOnBox 6
#define kIconOnBox 7
#define kAlertOnBox 8
#define kSecsRadio 10
#define kMinsRadio 11

#define kDefaultSecsId 401
#define kDefaultMinsId 402

#define kOff 0
#define kOn 1

#define kSecondsPerMinute 60

#define kTop 25
#define kLeft 12

#define kMarkApplication 1
#define kAppleMenuId (kBaseResId)
#define kFileMenuId (kBaseResId+1)
#define kAboutItem 1

#define kChangeItem 1
#define kStartStopItem 2
#define kQuitItem 3

#define kSysVersion 2

typedef enum{
  kBoolFalse=0,
  kBoolTrue=1
}tBool;

typedef enum {
  kTimeUnitSeconds=0,
  kTimeUnitMinutes=1
}tTimeUnit;

typedef struct {
  long iTime;
  int iSound, iIcon, iAlert;
  tTimeUnit iUnit;
}tSettings;



extern Handle DlogItemGet(DialogPtr aDialog, int aItem);
extern void CtlSet(DialogPtr aDialog, int aItem, int aValue);
extern int CtlGet(DialogPtr aDialog, int aItem);
extern void CtlFlip(DialogPtr aDialog, int aItem);
extern void ITextSet(DialogPtr aDialog, int aItem, Str255 *aStr);

extern void StartCountDown(long aNumSecs);
extern void HandleCountDown(void);
extern void UpdateCountDown(void);

extern void RestoreSettings(DialogPtr aSettingsDialog);
extern void SaveSettings(DialogPtr aSettingsDialog);
extern void HandleDialog(void);
extern void HandleFileChoice(int aTheItem);
extern void HandleAppleChoice(int aTheItem);
extern void HandleMenuChoice(long aMenuChoice);
extern void HandleMouseDown(void);
extern void HandleEvent(void);
extern void MainLoop(void);
extern void MenuBarInit(void);
extern void DialogInit(void);
extern void WinInit(void);
extern tBool Sys6OrLater(void);
extern void ToolboxInit(void);
extern int main(void);

#endif // Reminder_h

Dlog.c

/**
 * Dlog.c
 */

#include "Dlog.h"

tBool gDone;

EventRecord gTheEvent;
tSettings gSavedSettings;


WindowPtr gCountDownWindow;
long gTimeout, gOldTime;
tBool gIsCounting;

Handle DlogItemGet(DialogPtr aDialog, int aItem)
{
  int itemType;
  Rect itemRect;
  Handle itemHandle;
  GetDItem(aDialog, aItem, &itemType, &itemHandle, &itemRect);
  return itemHandle;
}

void CtlSet(DialogPtr aDialog, int aItem, int aValue)
{
  Handle itemHandle=DlogItemGet(aDialog, aItem);
  SetCtlValue((ControlHandle)itemHandle, aValue);
}

int CtlGet(DialogPtr aDialog, int aItem)
{
  Handle itemHandle=DlogItemGet(aDialog, aItem);
  return GetCtlValue((ControlHandle)itemHandle);
}

/*
void ITextSet(DialogPtr aDialog, int aItem, Str255 *aStr)
{
  Handle itemHandle=DlogItemGet(aDialog, aItem);
  SetIText(itemHandle, aStr);
}
*/
void CtlFlip(DialogPtr aDialog, int aItem)
{
  Handle itemHandle=DlogItemGet(aDialog, aItem);
  SetCtlValue((ControlHandle)itemHandle,
    (GetCtlValue((ControlHandle)itemHandle)==kOn)? kOff:kOn);
}

void StartCountDown(long aNumSecs)
{
  GetDateTime(&gOldTime);
  if(gSavedSettings.iUnit==kTimeUnitMinutes) {
    aNumSecs*=kSecondsPerMinute;
  }
  gTimeout=gOldTime+aNumSecs; // this is the timeout.
  gIsCounting=kBoolTrue;
}

// Called on Null event.
void HandleCountDown(void)
{
  if(gIsCounting==kBoolTrue) {
    long myTime;
    GetDateTime(&myTime);
    if(myTime!=gOldTime) {
      GrafPtr oldPort;
      gOldTime=myTime; // gTimeout-gOldTime==remaining seconds.
      // gen update, but how?
      GetPort(&oldPort);
      SetPort((GrafPtr)gCountDownWindow);
      InvalRect(&gCountDownWindow->portRect);
      SetPort(oldPort);
    }
  }
}

void UpdateCountDown(void)
{
  //
  WindowPtr win=(WindowPtr)gTheEvent.message;
  if(win==gCountDownWindow) {
    long remaining=gTimeout-gOldTime;
    Str255 myTimeString;
    BeginUpdate(win);
    MoveTo(kLeft, kTop);
    if(remaining<=0 || gIsCounting==kBoolFalse) {
      remaining=0;
      gIsCounting=kBoolFalse;
    }
    NumToString(remaining, myTimeString);
    EraseRect(&(gCountDownWindow->portRect));
    DrawString(myTimeString);
    EndUpdate(win);
  }
}

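// Model -> view: copy gSavedSettings into the dialog's items.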
void RestoreSettings(DialogPtr aSettingsDialog)
{
  Handle itemHandle;
  Str255 timeString;
  tBool isInSeconds=(gSavedSettings.iUnit==kTimeUnitSeconds)?
      kBoolTrue:kBoolFalse;
 
  itemHandle=DlogItemGet(aSettingsDialog, kTimeField);
  NumToString(gSavedSettings.iTime, timeString);
  SetIText(itemHandle, timeString);
 
  CtlSet(aSettingsDialog, kSoundOnBox, gSavedSettings.iSound);
  CtlSet(aSettingsDialog, kIconOnBox, gSavedSettings.iIcon);
  CtlSet(aSettingsDialog, kAlertOnBox, gSavedSettings.iAlert);
  CtlSet(aSettingsDialog, kSecsRadio, (isInSeconds==kBoolTrue)?kOn:kOff);
  CtlSet(aSettingsDialog, kMinsRadio, (isInSeconds==kBoolFalse)?kOn:kOff);

  itemHandle=DlogItemGet(aSettingsDialog, kSOrMField);
  SetIText(itemHandle,(gSavedSettings.iUnit==kTimeUnitSeconds)?
      "\pseconds":"\pminutes");
}

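// View -> model: copy the dialog's items back into gSavedSettings
// (only called on the [Save] path).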
void SaveSettings(DialogPtr aSettingsDialog)
{
  Handle itemHandle;
  Str255 timeString;

  itemHandle=DlogItemGet(aSettingsDialog, kTimeField);
  GetIText(itemHandle, timeString);
  StringToNum(timeString, &gSavedSettings.iTime);
 
  gSavedSettings.iSound=CtlGet(aSettingsDialog, kSoundOnBox);
  gSavedSettings.iIcon=CtlGet(aSettingsDialog, kIconOnBox);
  gSavedSettings.iAlert=CtlGet(aSettingsDialog, kAlertOnBox);
  gSavedSettings.iUnit=(CtlGet(aSettingsDialog, kSecsRadio)==kOn)?
        kTimeUnitSeconds:kTimeUnitMinutes;
}

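// Create the settings dialog on demand, run it modally, then dispose of
// it: the pattern described in the Analysis section above.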
void HandleDialog(void)
{
  tBool dialogDone;
  int itemHit;
  long alarmDelay;
  Handle itemHandle;
  DialogPtr settingsDialog;
 
  settingsDialog=GetNewDialog(kBaseResId, NULL, (WindowPtr)-1);

  ShowWindow(settingsDialog);
  RestoreSettings(settingsDialog);
 
  dialogDone=kBoolFalse;
  while(dialogDone==kBoolFalse) {
    ModalDialog(NULL, &itemHit);
    switch(itemHit) {
    case kSaveButton:
      SaveSettings(settingsDialog); // update them.
      dialogDone=kBoolTrue;
      break;
    case kCancelButton:
      dialogDone=kBoolTrue;
      break;
    case kSoundOnBox:
    case kIconOnBox:
    case kAlertOnBox:
      CtlFlip(settingsDialog, itemHit);
      break;
    case kSecsRadio:
      CtlSet(settingsDialog, kSecsRadio, kOn);
      CtlSet(settingsDialog, kMinsRadio, kOff);

      itemHandle=DlogItemGet(settingsDialog, kSOrMField);
      SetIText(itemHandle, "\pseconds");
      break;
    case kMinsRadio:
      CtlSet(settingsDialog, kSecsRadio, kOff);
      CtlSet(settingsDialog, kMinsRadio, kOn);

      itemHandle=DlogItemGet(settingsDialog, kSOrMField);
      SetIText(itemHandle, "\pminutes");
      break;
    }
  }
  DisposeDialog(settingsDialog);
}

void HandleFileChoice(int aTheItem)
{
  switch(aTheItem) {
  case kChangeItem:
    HandleDialog();
    break;
  case kStartStopItem:
    HiliteMenu(0);
    StartCountDown(gSavedSettings.iTime);
    break;
  case kQuitItem:
    gDone=true;
    break;
  }
}

void HandleAppleChoice(int aTheItem)
{
  Str255 accName;
  int accNumber, itemNumber, dummy;
  MenuHandle appleMenu;
  switch(aTheItem) {
  case kAboutItem:
    NoteAlert(kAboutAlert, NULL);
    break;
  default:
    appleMenu=GetMHandle(kAppleMenuId);
    GetItem(appleMenu, aTheItem, accName);
    OpenDeskAcc(accName);
    break;
  }
}

void HandleMenuChoice(long aMenuChoice)
{
  int theMenu, theItem;
  if(aMenuChoice!=0) {
    theMenu=HiWord(aMenuChoice);
    theItem=LoWord(aMenuChoice);
    switch(theMenu) {
    case kAppleMenuId:
      HandleAppleChoice(theItem);
      break;
    case kFileMenuId:
      HandleFileChoice(theItem);
      break;
    }
    HiliteMenu(0);
  }
}

void HandleMouseDown(void)
{
  WindowPtr whichWindow;
  int thePart;
  long menuChoice, windSize;
  thePart=FindWindow(gTheEvent.where, &whichWindow);
  switch(thePart) {
  case inMenuBar:
    menuChoice=MenuSelect(gTheEvent.where);
    HandleMenuChoice(menuChoice);
    break;
  case inSysWindow:
    SystemClick(&gTheEvent, whichWindow);
    break;
  case inDrag:
    DragWindow(whichWindow, gTheEvent.where, &screenBits.bounds);
    break;
  case inGoAway:
    gDone=kBoolTrue;
    break;
  }
}

void HandleEvent(void)
{
  char theChar;
  tBool dummy;
  WaitNextEvent(everyEvent, &gTheEvent, kSleep, NULL);
  switch(gTheEvent.what){
  case mouseDown:
    HandleMouseDown();
    break;
  case keyDown: case autoKey:
    theChar=(char)(gTheEvent.message & charCodeMask);
    if((gTheEvent.modifiers & cmdKey)!=0) {
      HandleMenuChoice(MenuKey(theChar));
    }
    break;
  case nullEvent:
    HandleCountDown();
    break;
  case updateEvt:
    UpdateCountDown();
    break;
  }
}

void MainLoop(void)
{
  gDone=kBoolFalse;
  while(gDone==kBoolFalse) {
    HandleEvent();
  }
}

void MenuBarInit(void)
{
  Handle myMenuBar;
  MenuHandle aMenu;
  myMenuBar=GetNewMBar(kBaseResId);
  SetMenuBar(myMenuBar);
  DisposHandle(myMenuBar);
  aMenu=GetMHandle(kAppleMenuId);
  AddResMenu(aMenu, 'DRVR');
  DrawMenuBar();
}

void WinInit(void)
{
  gCountDownWindow=GetNewWindow(kBaseResId, NULL, (WindowPtr)-1);
  gIsCounting=kBoolFalse;
  SetPort(gCountDownWindow);
  TextFace(bold); // it's the same in THINK C.
  TextSize(24);
  ShowWindow(gCountDownWindow);
}

void DialogInit(void)
{
  gSavedSettings.iTime=12;
 
  gSavedSettings.iSound=kOn;
  gSavedSettings.iIcon=kOn;
  gSavedSettings.iAlert=kOn;
 
  gSavedSettings.iUnit=kTimeUnitSeconds;
}

tBool Sys6OrLater(void)
{
  OSErr status;
  SysEnvRec SysEnvData;
  int dummy;
  tBool result=kBoolTrue;
  status=SysEnvirons(kSysVersion, &SysEnvData);
  if(status!=noErr || SysEnvData.systemVersion<0x600) {
    StopAlert(kBadSysAlert, NULL);
    result=kBoolFalse;
  }
  return result;
}

void ToolboxInit(void)
{
  InitGraf(&thePort);
  InitFonts();
  InitWindows();
  InitMenus();
  TEInit();
  InitDialogs(NULL);
  MaxApplZone();
}

int main(void)
{
  ToolboxInit();
  if(Sys6OrLater()) {
    DialogInit();
    MenuBarInit();
    WinInit();
    InitCursor();
    MainLoop();
  }
  return 0;
}

Conclusion

As a whole, The Macintosh Pascal (and C) Programming Primer is a brilliantly simple introduction to traditional (non-OO) Macintosh programming. However, the dialog box chapter ("Working With Dialogs") is a major exception. After a bit of sleuthing (which I should never have needed to do), I worked out the problem and wrote a better demo (and made sure the cursor is an arrow instead of the watch icon). This means I'm closer to my goal of writing a simple simulation framework.

For the chronically lazy amongst you, feel free to download the full project from here.