
Saturn V Instrument Ring


jfb:
There's a companion video on Destin's second channel that has more material with Luke abusing Linus ("those are called wires"), and it's a blast.  I had a few flashbacks when he described combing through octal dumps for a couple of weeks to troubleshoot a particular issue, only to discover that wasn't the problem after all; I've done a variation on that dance more than once. 

I also like how it quietly refutes the whole "we didn't have the technology to go to the Moon in 1969" nonsense.  You don't need 64-bit general-purpose CPUs and gigabytes of memory to guide a rocket or spacecraft.  What we had at the time was heavy, power-hungry, and labor-intensive to build, but it was adequate for the task. 

JayUtah:
In a class once I guided a discussion about what it would take to build a guidance system -- or rather, the processor portion of it.  It was an exercise in requirements analysis.  We started with a COTS solution, then pared it down to needing only six registers and some custom arithmetic hardware.  I honestly think those guys had an easier time thinking outside the box, because back then there wasn't as much of a box yet.

But also with the hex dumps -- I've been there too.  My first computer stuff as an engineer was done on the IBM 370 mainframes.  You could always tell whether your program worked by the thickness of the printout.  A thin printout meant your program ran and produced results.  A thick one meant it crashed and OS/MVS (or whatever it was back then) dumped all 256 kB of core as hex.  The Michigan Terminal System, which ran on the same hardware, used EBCDIC 'a' (0x81) as a "core constant."  Before bringing your program into memory, it filled your entire memory space with that value.  And then when the inevitable dumps happened, it "cleverly" omitted those from the dump as untouched.
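
To make the "core constant" trick concrete, here's a rough sketch of the idea in C rather than 370 assembler.  The fill byte is EBCDIC 'a', but the buffer size, dump format, and the little test program are made up for illustration -- this is the general idea, not anything MTS actually did.

--- Code: ---
#include <stdio.h>
#include <string.h>

#define FILL      0x81   /* EBCDIC 'a' -- the "core constant" */
#define CORE_SIZE 4096   /* illustrative only */
#define LINE      16     /* bytes per dump line */

static unsigned char core[CORE_SIZE];

/* Pre-fill the whole region with the constant before "loading" a program. */
static void prefill(void)
{
    memset(core, FILL, sizeof core);
}

/* Dump core as hex, skipping any line that is still all fill bytes --
 * i.e. memory the program never touched. */
static void dump(void)
{
    for (size_t off = 0; off < sizeof core; off += LINE) {
        int touched = 0;
        for (size_t i = 0; i < LINE; i++) {
            if (core[off + i] != FILL) {
                touched = 1;
                break;
            }
        }
        if (!touched)
            continue;   /* "cleverly" omitted from the printout */

        printf("%06zX ", off);
        for (size_t i = 0; i < LINE; i++)
            printf(" %02X", core[off + i]);
        putchar('\n');
    }
}

int main(void)
{
    prefill();

    /* Simulate a program scribbling on a few spots in memory. */
    memcpy(&core[0x100], "\x58\x20\xC0\x08", 4);   /* arbitrary bytes */
    core[0x2A0] = 0x00;

    dump();
    return 0;
}
--- End code ---

Run it and the only dump lines you see are the ones the "program" actually touched; anything still holding 0x81 never makes it to the printout.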

Grashtel:

--- Quote from: JayUtah on January 04, 2021, 01:31:43 PM ---But also with the hex dumps -- I've been there too.  My first computer stuff as an engineer was done on the IBM 370 mainframes.  You could always tell whether your program worked by the thickness of the printout.  A thin printout meant your program ran and produced results.  A thick one meant it crashed and OS/MVS (or whatever it was back then) dumped all 256 kB of core as hex.  The Michigan Terminal System, which ran on the same hardware, used EBCDIC 'a' (0x81) as a "core constant."  Before bringing your program into memory, it filled your entire memory space with that value.  And then when the inevitable dumps happened, it "cleverly" omitted those from the dump as untouched.

--- End quote ---
And I can just imagine how "helpful" that was for debugging, and the amount of cursing it caused.

molesworth:

--- Quote from: Grashtel on January 31, 2021, 05:23:42 PM ---
--- Quote from: JayUtah on January 04, 2021, 01:31:43 PM ---But also with the hex dumps -- I've been there too.  My first computer stuff as an engineer was done on the IBM 370 mainframes.  You could always tell whether your program worked by the thickness of the printout.  A thin printout meant your program ran and produced results.  A thick one meant it crashed and OS/MVS (or whatever it was back then) dumped all 256 kB of core as hex.  The Michigan Terminal System, which ran on the same hardware, used EBCDIC 'a' (0x81) as a "core constant."  Before bringing your program into memory, it filled your entire memory space with that value.  And then when the inevitable dumps happened, it "cleverly" omitted those from the dump as untouched.

--- End quote ---
And I can just imagine how "helpful" that was for debugging, and the amount of cursing it caused.

--- End quote ---
Oh, it was fun (for certain values of "fun" ;) ), although once you'd read through a few dumps it started to get easier and faster.  I could recognise most of the op-codes, how certain sequences corresponded to sections of the program source (compilers weren't too smart in those days), whether an address was valid or not, common operand values, etc.  It didn't take long to find the bugs in most cases.

However, having a decent debugger and IDE is a big leap forward, and I wouldn't want to go back to digging through dumps these days (although I have on occasion had to!).
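
For anyone who never had to read a dump that way, here's a rough sketch in C of the kind of pattern-matching involved.  It walks an invented System/370-style instruction stream, using the fact that the top two bits of an opcode give the instruction length, and names a handful of common opcodes -- the byte sequence and the tiny opcode table are purely illustrative, not a real disassembler.

--- Code: ---
#include <stdio.h>

/* A few common System/370 opcodes one learned to spot in a dump. */
struct op { unsigned char code; const char *mnem; };

static const struct op ops[] = {
    { 0x05, "BALR" }, { 0x07, "BCR" }, { 0x18, "LR" }, { 0x1B, "SR" },
    { 0x41, "LA" },   { 0x47, "BC" },  { 0x50, "ST" }, { 0x58, "L" },
    { 0x5A, "A" },
};

static const char *mnemonic(unsigned char code)
{
    for (size_t i = 0; i < sizeof ops / sizeof ops[0]; i++)
        if (ops[i].code == code)
            return ops[i].mnem;
    return "??";
}

/* On the 370 the top two bits of the opcode give the instruction length:
 * 00 -> 2 bytes, 01 or 10 -> 4 bytes, 11 -> 6 bytes. */
static size_t insn_len(unsigned char code)
{
    switch (code >> 6) {
    case 0:  return 2;
    case 3:  return 6;
    default: return 4;
    }
}

int main(void)
{
    /* An invented instruction stream of the kind you'd stare at in a dump:
     * load a word, add a word to it, store it back, return via register 14. */
    static const unsigned char text[] = {
        0x58, 0x20, 0xC0, 0x08,   /* L   2,8(,12)  */
        0x5A, 0x20, 0xC0, 0x0C,   /* A   2,12(,12) */
        0x50, 0x20, 0xC0, 0x10,   /* ST  2,16(,12) */
        0x07, 0xFE,               /* BCR 15,14     */
    };

    for (size_t off = 0; off < sizeof text; ) {
        unsigned char opcode = text[off];
        size_t len = insn_len(opcode);

        printf("+%04zX  %02X -> %-4s  (%zu-byte instruction)\n",
               off, opcode, mnemonic(opcode), len);
        off += len;
    }
    return 0;
}
--- End code ---

The real skill, of course, was doing that lookup by eye, straight off the printout.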

JayUtah:
Speaking for myself, our programs were simpler back then too.  Or at least had less code in them.  And yes, you got good at knowing what parts of the source code corresponded to sections of opcodes.  The MTS "core constant" was a big help, because you could more quickly home in on parts of the memory image that had been manipulated.  You didn't have to worry whether you were looking at garbage left over from the last program that occupied that memory.  Of course I did a lot of work in IBM 370 assembly language (because engineers want the raw speed).  The assembler listing gave you the opcodes.  And the crash dump printout gave you some -- but not a lot -- of the things you ask modern interactive debuggers to tell you.

And yes, I'd have to agree that IDEs and interactive debuggers are game changers.  But our programs are vastly more complex these days.  And the expected turnaround is much shorter.  I'm not sure I could agree with the notion that it's just easier these days.  The tools are better, but the problems are harder.
