Well, that was the big question, wasn't it? Would the craft remain flyable notwithstanding the warnings?
Yep. You can hear the urgency in Armstrong's voice when he asks them to rule on the 1202.
One interesting way to achieve this is called fuzzing. You just throw random garbage at the program and see what breaks.
We fuzz our software extensively. The joke goes that the testing department for a bar tests for patrons asking for 1 beer, -1 beer, 9999999 beers, 0 beers, and "dog" beers. Then the whole thing blows up when someone asks to use the restroom.
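For anyone who hasn't watched it done: a toy fuzzer is just a loop that generates garbage and watches for crashes. A minimal Python sketch (the parse_order target is invented to match the joke; real fuzzers like AFL are much smarter about generating and mutating inputs):

    import random
    import string

    def random_garbage(max_len=64):
        # Build a random string of printable junk to throw at the target.
        length = random.randint(0, max_len)
        return "".join(random.choice(string.printable) for _ in range(length))

    def fuzz(target, iterations=10000):
        # Hammer the target with random inputs; log anything that crashes it.
        for _ in range(iterations):
            payload = random_garbage()
            try:
                target(payload)
            except Exception as exc:
                print("crash on input %r: %r" % (payload, exc))

    def parse_order(text):
        # Hypothetical target: parses bar orders like "3 beers".
        count, item = text.split()
        return int(count), item

    fuzz(parse_order)

The interesting failures are the ones you never thought to test for, which is exactly the restroom problem in the joke.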
In a sense, the Apollo simulation supervisors were fuzzing the controller/astronaut system.
Dunno... If I were going to fuzz the Net-1/MOCR setup, I'd have a mariachi band suddenly appear right at DOI.
You know, I could probably teach a course based entirely on NTSB reports and what they reveal about human, engineering, and system failures.
I've taken such courses, based largely on those kinds of sources. There are also a couple of good books written by sociologists who study how critical decision-makers work in technical environments.
But they probably did already know during the landing that 16 68 was pretty compute-intensive...
Except that I don't think it is.
Updating noun 68 is intensive, and it happens anyway as part of the landing tasks. I don't think displaying it is, even if it's as often as once per second. What I gather from the analysis is that it was just enough to tip the load over the edge. If you're running at 99% capacity and you add an extra 2%, the nonlinear response is what gets you. You don't get a 1202 at 99%, but you do get one at 101%.
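That cliff is easy to show with a toy queue model. This is not the real AGC Executive, just an illustration with made-up numbers: demand under capacity clears every cycle, while demand just over it piles up until the job backlog overflows.

    def simulate(load_fraction, capacity=100, cycles=1000, backlog_limit=700):
        # Toy fixed-cycle scheduler: work arrives, the CPU retires what it
        # can, and leftovers accumulate. backlog_limit stands in
        # (arbitrarily) for the Executive's finite set of job slots.
        backlog = 0.0
        for cycle in range(cycles):
            backlog += load_fraction * capacity   # work demanded this cycle
            backlog -= min(backlog, capacity)     # work actually completed
            if backlog > backlog_limit:           # out of slots -> alarm
                return cycle
        return None

    for load in (0.95, 0.99, 1.01, 1.05):
        overflow = simulate(load)
        if overflow is None:
            print("load %.0f%%: keeps up indefinitely" % (load * 100))
        else:
            print("load %.0f%%: overflow at cycle %d" % (load * 100, overflow))

At 99% the backlog clears every cycle; at 101% it grows a little every cycle until the queue blows. That's the nonlinearity in a nutshell.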
...and turning it off would relieve the load. It was a good call.
Yes. If the recommendation based on initial analysis is that the crew has to reduce the load on the Executive, then any real-time tasks that can be eliminated should be. But the snippets we hear on the FD loop have them speculating about what's so special about noun 68. This would have been a wrong direction to go, but as you say, they had no urgency beyond stabilizing the current thing.
A similar situation happened during the fatal Columbia re-entry. As temperature and other sensors started going offline, the flight controllers were looking for systemic commonalities. It wasn't until much later in the troubleshooting process that they realized all those sensors were going offline because they were being destroyed -- the commonality was that they were in the rapidly heating part of the orbiter.
Yeah, and this is why I teach my students about fault trees and how they help you avoid jumping to conclusions. But I think you're still a little hard on the Apollo 11 flight controllers. They succeeded, didn't they?
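For anyone unfamiliar: a fault tree decomposes a top-level failure into AND/OR combinations of basic events, which forces you to enumerate the candidate causes instead of grabbing the first plausible one. A toy evaluator, assuming independent events and invented probabilities:

    def gate_and(*probs):
        # All inputs must fail for the gate output to fail.
        result = 1.0
        for p in probs:
            result *= p
        return result

    def gate_or(*probs):
        # Any single input failing fails the gate output.
        result = 1.0
        for p in probs:
            result *= 1.0 - p
        return 1.0 - result

    sensor_fails = 0.01                                   # one channel
    both_channels = gate_and(sensor_fails, sensor_fails)  # redundant pair
    wiring_fails = 0.002
    loss_of_signal = gate_or(both_channels, wiring_fails)
    print("P(loss of signal) = %.5f" % loss_of_signal)    # ~0.00210

Even in toy form it makes you write down the wiring branch next to the sensor branch, which is the discipline the Columbia example was missing in the moment.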
They did, and in the final analysis that's all that matters. I merely bring it up as an example of de minimis thinking. The saving grace is that de minimis remedies aren't doomed to immediate failure. Also, in their further defense, there is a theory of operating complex systems that says you apply only the minimal effective remedy. You don't fix more than what the data say is broken.
Yeah. So you have to test for the worst case with every possible task running to make sure you have enough spare cycles. If you don't, you have to carefully plan which tasks are allowed to run.
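In code, the "plan which tasks are allowed to run" approach amounts to a static admission check: add up every admitted task's worst-case cost and refuse the set if it doesn't fit the cycle budget with margin to spare. All the numbers below are invented for illustration:

    # name -> worst-case cost per major cycle, in ms (invented figures)
    TASKS = {
        "guidance": 28,
        "throttle": 10,
        "display_noun_68": 5,
        "radar_read": 13,
    }
    CYCLE_BUDGET_MS = 60   # length of one major cycle
    MARGIN = 0.10          # spare capacity held in reserve

    def admissible(tasks, budget_ms, margin):
        # True only if the summed worst cases fit the budget minus margin.
        demand = sum(tasks.values())
        return demand <= budget_ms * (1.0 - margin)

    print(admissible(TASKS, CYCLE_BUDGET_MS, MARGIN))
    # 28 + 10 + 5 + 13 = 56 ms against 54 ms usable -> False: drop something.

The point of the margin is exactly the 99%-vs-101% cliff above: you never want the worst case to land right at the edge of the budget.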
And I gather MIT generally took the latter approach. The specs for the computer had to be locked down at a certain point, but afterwards people started realizing what a useful gadget the computer was and gave it more and more tasks to do.