Weird, because when I look at the photos carefully using software that NASA uses (Photoshop, not GIMP)...
In practice, both. The NASA centers I have personal knowledge of are chiefly Apple shops, and they do use Adobe products, since those are the de facto standard. But there was also a pretty big investment in Gentoo Linux, especially for their high-performance (i.e., non-desktop) computing. Since The GIMP is free and ships with many Linux distributions, there is no reason it cannot be used at NASA.
The more pertinent question is which toolchain was used to operate on the photos our poster has chosen. The answer is likely any or all of them. As others have pointed out, most of the photos seem to have come from convenience sources scattered around the web. Some of them are clearly from Jack White, viz. the ones in which features are circled with ellipses (a Jack White hallmark). Unless one is able to undo or control for the effects of those previous manipulations, there is almost nothing of probative value to be extracted by subsequent raster-style image manipulation.
The advantage in using The GIMP is that it's an open-source tool. If there's ever any question about what some particular feature accomplishes, one can turn to the source code for a definitive answer. To know what Adobe Photoshop is doing, one often has to search through Adobe's documentation and then trust that the code actually conforms to it. For example, our OP says one of the things he did was to use "monochrome," by which I understand he converted the image to monochrome to test some particular hypothesis. But desaturating a color image for forensic purposes is non-trivial. The math is easy enough in principle, but it is necessarily parameterized in ways that compel the investigator to make choices, and those choices bear on how valid the final result is for his intended purpose.
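To make that concrete, here is a minimal sketch, in Python purely for illustration (the pixel value is made up, and nothing here is presumed to be what the OP or any particular tool actually did). Three common desaturation formulas applied to the same pixel give three different gray values:

```python
# Three common ways to reduce an RGB pixel to a single gray value.
# The pixel value is arbitrary; the difference between formulas is the point.

def lightness(r, g, b):
    # midpoint of the brightest and darkest channels
    return (max(r, g, b) + min(r, g, b)) / 2

def average(r, g, b):
    # simple arithmetic mean of the three channels
    return (r + g + b) / 3

def luminosity(r, g, b, weights=(0.2126, 0.7152, 0.0722)):
    # weighted sum; these particular weights are the Rec. 709 luma coefficients
    return weights[0] * r + weights[1] * g + weights[2] * b

pixel = (200, 60, 40)  # a reddish pixel, chosen arbitrarily
for name, fn in (("lightness", lightness),
                 ("average", average),
                 ("luminosity", luminosity)):
    print(f"{name:10s} -> {fn(*pixel):6.1f}")
# lightness  ->  120.0
# average    ->  100.0
# luminosity ->   88.3
```

Which of those a given "monochrome" button invokes matters, because they do not treat the red and blue ends of the spectrum equally.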
Black-and-white photographers were well acquainted with the use of color filters to control which wavelengths of light the film actually sees. If you use a deep red filter, for example, the blue of the sky doesn't penetrate, and thus the sky areas of the photo come out dark. Digital photos are almost always already quantized to only a few wavelengths -- relative intensities in certain preselected wavelengths of red, green, and blue. This approximates the energies in the spectrum of the original image. The tuple of coefficients that apply to each wavelength is what gets stored as the approximation of color for some spot in the image, some pixel. Those coefficients can be transformed into a further reduction to some smaller color space, but there is no One True Wavelength that represents a meaningful monochromatic version of the image. Assuming we start from a triple of red, green, and blue intensity coefficients, the reduction is typically expressed as further weights on those coefficients. The academic literature provides a number of good choices for those weights, but one would have to go into the computer code to see which ones were used, and therefore whether the wavelengths represented in the resulting data are appropriate to what one needs.
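As a rough illustration (the two pixel values below are hypothetical, and the "red filter" row is just a crude stand-in for the deep red contrast filter mentioned above, not a published standard), here is how the same pixels come out under different weightings. Rec. 601 and Rec. 709 are standard published luma coefficient sets:

```python
# Gray values produced by different channel weightings. Rec. 601 and Rec. 709
# are standard luma coefficients; "red filter" mimics a deep-red B&W filter.
# The pixel values are hypothetical.

WEIGHTS = {
    "Rec. 601":   (0.299,  0.587,  0.114),
    "Rec. 709":   (0.2126, 0.7152, 0.0722),
    "red filter": (1.0,    0.0,    0.0),
}

PIXELS = {
    "sky blue": (100, 150, 235),
    "deep red": (230, 40, 30),
}

for pname, (r, g, b) in PIXELS.items():
    for wname, (wr, wg, wb) in WEIGHTS.items():
        gray = wr * r + wg * g + wb * b
        print(f"{pname:8s} under {wname:10s} -> {gray:6.1f}")
# sky blue under Rec. 601   ->  144.7
# sky blue under Rec. 709   ->  145.5
# sky blue under red filter ->  100.0
# deep red under Rec. 601   ->   95.7
# deep red under Rec. 709   ->   79.7
# deep red under red filter ->  230.0
```

The two published sets agree closely for the blue pixel but differ noticeably for the red one, and the red-only weighting darkens the sky dramatically -- exactly the kind of quiet choice on which a "forensic" claim about brightness can end up resting.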