Sure, in hindsight.
I remember it being discussed at the time. Remember also, as you say, segmented memory addressing, a flash-in-the-pan technique that was very quickly superseded by flat address spaces in other, better architectures. But we got stuck with segment registers for far longer than needed simply because Intel cemented it in place. In deference to your defense, I'll point out that the AGC had a pretty bonkers memory banking scheme. For all its beauty, a lot of it too was obsolete right out of the gates (multiple puns intended). I'll quit ragging on Intel.
Remember the notion that nobody could possibly need more than 640k RAM?
Apocryphal statement. But yes, the history of computer technology is the history of people making bizarrely wrong guesses about what the future would bring. It wasn't too long ago that 15 teraFLOPS was a pretty fast computer. Almost half my senior software staff comes from the gaming industry. Those guys know how to push hardware. But they also know how to analyze the hardware and optimize for it so that they don't push beyond the hardware. That talent is what I wish were more prevalent in the software industry, and I think that's what the AGC programmers exemplified.
I have always regarded the 1201 and 1202 as the AGC equivalent of "I don't know".
[...]
"I don't know" is a valid answer to any question. I would rather have that then an outright crash on a lunar mission.
I had to read this a couple of times before I understood it enough to agree with it. Yes, I think an important consideration in any critical system -- however designed and built -- is not to promise (or insinuate) anything it can't deliver. So on the one hand, an automated system should never behave as if it has things well in hand when it can know it doesn't. On the other hand, it should do its best to fail gracefully. And by that I mean fall back to successively less capable modes of operation rather than stop suddenly altogether. Even sophisticated automotive controllers often have a "limp mode" that provides basic engine operation. And for PGNS there was AGS. But especially with highly qualified pilots, you don't want to err on the side of suppressing failure indications in a misguided attempt to limp along as if nothing was wrong. One can make the case that certain large airframe manufacturers need to learn that lesson anew.
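For what it's worth, here's how I'd sketch that "successively less capable modes" idea in C -- purely illustrative, with invented mode names, and nothing to do with any real automotive or PGNS code:

    /* Illustrative only: "fail gracefully" as stepping down through successively
     * less capable modes instead of stopping outright, while still announcing
     * the fault rather than suppressing it. Mode names are invented here. */
    #include <stdio.h>

    typedef enum { MODE_FULL, MODE_REDUCED, MODE_LIMP, MODE_SHUTDOWN } Mode;

    static Mode degrade(Mode current) {
        switch (current) {
        case MODE_FULL:    return MODE_REDUCED;   /* shed the nice-to-haves       */
        case MODE_REDUCED: return MODE_LIMP;      /* basic operation only         */
        default:           return MODE_SHUTDOWN;  /* nothing left to fall back on */
        }
    }

    static void handle_fault(Mode *mode, const char *what) {
        *mode = degrade(*mode);
        /* The crucial part: report the failure instead of limping along silently. */
        fprintf(stderr, "FAULT: %s -> now in mode %d\n", what, (int)*mode);
    }

    int main(void) {
        Mode mode = MODE_FULL;
        handle_fault(&mode, "sensor dropout");   /* FULL -> REDUCED */
        handle_fault(&mode, "actuator timeout"); /* REDUCED -> LIMP */
        return 0;
    }

The state machine itself is trivial; the point is that every step down is announced to whoever is flying, not papered over.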
The way the AGC was architected, we could discuss forever what a "crash" means, in the computer sense. But the real genius was that while 1201 and 1202 simply signaled symptoms in terms of undesirable, detectable software states, a human could make a judgment. The AGC didn't know why the Executive was overloaded, or why there were no available core sets. That level of introspection was not provided by the programmers. But Steve Bales knew. Which is to say, he knew that the consequences of not running certain periodic tasks would be an accumulation of uncorrected error, but that as long as that condition did not persist, the entire vehicle could stay within flight tolerances even though not strictly within the designated deadband. It's the equivalent of taking your eyes off the road for a minute to fiddle with the radio. It's naturally not as safe as maintaining situational awareness, but it can be tolerated briefly.
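If it helps to see the shape of the idea, here's a toy C sketch of mine -- emphatically not the actual AGC Executive, with made-up names and a placeholder pool size: the scheduler keeps a small fixed pool of job slots, and when a request finds none free it raises an alarm code and returns rather than halting. Whether that symptom matters is left to whoever is watching.

    /* Toy sketch, not AGC assembly: a small fixed pool of job slots ("core sets").
     * When a request finds none free, the scheduler raises an alarm code and
     * carries on; it reports the symptom and leaves the judgment to someone else. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_CORE_SETS       7     /* small and fixed at build time; placeholder value */
    #define ALARM_NO_CORE_SETS  1202  /* the "no core sets" alarm, as I understand it */

    typedef struct {
        bool in_use;
        void (*job)(void);            /* the work this slot is tracking */
    } CoreSet;

    static CoreSet core_sets[NUM_CORE_SETS];

    static void raise_alarm(int code) {
        /* Signal the undesirable state; no attempt at introspection or diagnosis. */
        printf("PROG ALARM %04d\n", code);
    }

    /* Returns true if the job got a slot, false after raising the alarm. */
    static bool schedule_job(void (*job)(void)) {
        for (size_t i = 0; i < NUM_CORE_SETS; i++) {
            if (!core_sets[i].in_use) {
                core_sets[i].in_use = true;
                core_sets[i].job = job;
                return true;
            }
        }
        raise_alarm(ALARM_NO_CORE_SETS);
        return false;
    }

    static void dummy_job(void) { /* stand-in for a periodic task */ }

    int main(void) {
        /* Fill the pool, then ask for one more: the last request trips the alarm. */
        for (int i = 0; i <= NUM_CORE_SETS; i++)
            (void)schedule_job(dummy_job);
        return 0;
    }

Notice there is no "why" anywhere in that alarm path -- just the detectable state. That's the part Bales supplied from the ground.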
Ah. That is a personal bugbear of mine. Cross-platform developers are becoming divorced from the actual hardware. The hardware matters no matter how one slices it.
Yeah, there's a lot of potential discussion to be had there, and if we had it I'd like the more professional software developers to weigh in on it. I've rarely seen reuse done well, even with the best of intentions. I've rarely seen portability done well, but I'm sure some of the open-source community could easily come up with good examples. What irks me above all are some programmers who come from a certain language culture (which shall remain nameless) who are blithely unaware that any sort of hardware exists at all. A few of these people -- a very few, thankfully -- seem to have no idea whatsoever how computers work.
That said, as ka9q points out, often the right answer is simply to throw more silicon at the problem. If $2,000 worth of additional RAM will solve someone's problem in as much time as it takes to install the SIMMs, then why would any conscientious engineering company expend ten times that much or more in programmer time and money to bring the present solution under the existing hardware umbrella? For many classes of problems in computing, there are severe limits to what can be optimized by programmers burning their neurons long into the night. I've seen talented programmers achieve factors (not just margins) of increased resource efficiency -- admittedly in originally poor code. But I've also seen expensive software improvement efforts that result in only marginal increases in performance or efficiency, sometimes at the expense of correctness in a complicated algorithm. Whatever else an algorithm is, it has to be correct.
I've found that electrical engineers take a very different approach to software than computer scientists. Historically they've written only embedded software, and they don't think for a moment that they can change the hardware without also having to change the software, or that the software they write for one gadget will transfer seamlessly to some other gadget. The commercial reality of reuse and standardization is changing this, but if you want to talk just in terms of what EEs think software is, it's instructive.