Randolph Kirk

Interviewed by
Henrik Hargitai
Interview date
May 13, 2023
Usage Information and Disclaimer

This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.

This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.

Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.

Preferred citation

In footnotes or endnotes please cite AIP interviews like this:

Interview of Randolph Kirk by Henrik Hargitai on May 13, 2023,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/48319

For multiple citations, "AIP" is the preferred abbreviation for the location.

Abstract

Interview with Randolph Kirk, Scientist Emeritus at the USGS Astrogeology Science Center. Kirk discusses pursuing his PhD in planetary science, his interest in remote sensing, and his thesis work on developing photoclinometry. He describes in detail many of the changes that have taken place in planetary mapping over the years, as the field has shifted from analog techniques to digital. Kirk talks about the process of working with a software vendor to develop the mapping software needed by USGS. He discusses his work on mapping Venus using radar altimetry, as well as the creation of the Venus globe, one of his favorite projects. Kirk describes the workflows of planetary mapping within USGS and the collaboration between draftsmen, geologists, airbrush artists, and other technicians. He talks about the process of deciding on landing sites for new rovers or landers, and he recalls the differences between mapping the landing sites for Mars Pathfinder versus Viking. Other projects mentioned include the Mars Exploration Rover, Curiosity, and Cassini. The interview concludes with Kirk explaining the importance of planetary mapping in ensuring that data collected from space missions is synthesized into usable products.

Transcript

Intro:

Randolph Kirk has participated in many missions to the Moon, Venus, Mars, asteroids, comets, and icy satellites. He has helped direct planetary mapping at the USGS since the early 1990s and has developed practical methods for topographic mapping by shape from shading and by adapting commercial stereo workstations to planetary use. In this interview he speaks about the techniques and methods of planetary topographic mapping from the analog era through the analog-digital transition and early digital mapping projects. We also focus on traverse planning and mapping, the Pathfinder landing site mapping, radar-based Venus topographic mapping methods, and the question of physical planetary globe production.

Hargitai:

Isn't what you did since the end of the 1990s a continuation of what Sherman Wu did in the '60s, '70s, and '80s, that is, topographic map creation?

Kirk:

In effect, but in a technical sense it was also almost a completely new version. Just to give my background, I got my PhD in 1987 as a planetary scientist, a geophysicist essentially, and I was hired here that fall as a research scientist. But I'd had an interest in remote sensing all along. Part of my thesis work was developing what we called at the time photoclinometry—shape-from-shading is the other term—and that's a topographic mapping technique. It got to be the late 1980s, early 1990s. The generation of mappers here that had been through the Apollo program—Sherman Wu, Ray Batson, and so on—were getting ready to retire. The other thing that was happening was that the topographic mapping Sherman was doing utilized what are called analog plotters, where essentially you have a stereo pair of pieces of film, a human looking at it and matching points to the ground in 3D through their vision, and then a computer that takes care of the geometry.

It's a little more versatile than the first-generation stereoplotters, which purely, you know, assumed constant parallax, essentially. If you program the machine, it can handle different sensors, and Sherman did some very creative things with that. But they were huge electromechanical machines, and he obtained several of them surplus from the defense community. We could never have afforded them new. These were million-dollar machines, and three of them filled an entire room, something like that. So the other thing that was happening, well, two other things, one is these machines were wearing out. The mechanical parts were wearing out, and they were no longer state-of-the-art. There were some options for getting repair parts, but they were expensive. Meanwhile, the softcopy, digital photogrammetry was starting to evolve. Computer power was enough to take over the image-matching as well as the geometry calculations, essentially. We were acutely aware here that whereas people working on softcopy photogrammetry were taking film, and scanning it, and doing the photogrammetry in a computer, we were in a position of having digital images from Mars, mainly, but also other places, and printing them out on film, and sticking them into these optical machines. It seemed like a huge waste of effort to go from digital to film, back to digital again. At that time, they asked me to look into leading an effort for what would be the replacement for that technology, and some of the ideas were to simply develop what we needed within the framework of our own software. But I had been to some meetings and read a lot of papers on what the state of the art was, and thought there were advantages to going with a commercial solution.

We put together a request for proposals, essentially, with some Mars images, and sent it around to the companies that were active then, and said, "Show us what you can do, making a topographic map of Valles Marineris on Mars, and tell us about your software." We got that information back, and went with—the company was called Helava at the time, and they had a product called SOCET SET. It had among the best capabilities but also some very enthusiastic people. The person who got that assignment to look at our images there really went to town with them and enjoyed it. We had a wonderful working relationship over the years with him and several other people there. We went with the product, we bought it, we started adapting it to our own uses, which mainly means bringing in the kinds of images we have, bringing in the metadata that tells where the camera is, and all of that sort of thing, and sensor models, which are the calculation between the image space and what you're looking at out in the world. Some cameras that we used were just ordinary framing cameras, and there was a generic model, and you put in the focal length and a few other parameters, and you're good to go. Others had to be adapted, and we rewrote. The company changed hands many times. It was Helava, and then it was Leica Helava, and then it was Leica Helava Zeiss. Then it was bought by BAE, which is one of the biggest defense contractors in the world, so it's a tiny subsidiary of that. The other part of the adventure is that around 2003, they had the idea to replace SOCET SET and several other programs that were even more important to their military customers, which were what they call Light Table programs for looking at images, and writing on images, and marking military targets on images, and the date the picture was taken.

If you think of the famous photos from the Cuban Missile Crisis saying, "These are the Russian missiles, and these are the support vehicles, and this was taken, you know, here's before they were installed, and here's after." It's all that kind of thing but digital. They wanted to merge all that software, and we were prepared to transition. It turned out that the Light Table applications were the prime driver, and so that delayed getting the photogrammetry into the new system by many years. Then it was there, but it was Earth only, and we fought that for years, and then more years of adapting. The amazing fact is that only now, after my retirement, are we getting to the point where we can use that new product that was conceived in 2003 or so. Twenty years later, we're starting to use it for real work.

Hargitai:

It's like a space mission?

Kirk:

Yes, it really is.

Hargitai:

If I understand correctly, the basic difference between the analog era and the digital era is that in the analog era, you do single contour lines, but in the digital era, it's continuous coverage, pixel by pixel.

Kirk:

Yes, that's accurate, but let me expand on that. The analog machines could, in fact, make point measurements. You could say the analyst will go to this mountain peak, and measure the height. Put the cursor on the ground, press a button, the measurement is taken. But it's very inefficient to do that. Yes, contour lines were usually collected, and so the idea, it's like an Etch-A-Sketch, a skill that I never in my life could master. You set a height for the contour, you go to where the cursor meets the ground, apparently, and then you try and use little wheels to drive a curved line along the ground without leaving it vertically. It requires tremendous eye-hand coordination and, yes, you get contours. The image data doesn't quite correspond. Sherman published large numbers of maps that have contours on them, but there's nothing else. There's no images of the ground features to relate it to, and that was a real problem for usability. There was some work done with trying to interpolate the data from the contours into a digital model that would have heights at every point.

They'd be derived in between, and there were artifacts, and so on. But, for example, there was a global digital model of Mars and one of Phobos done in that way. With the modern generation of softcopy, the fundamental idea is that the computer is looking at two images, and trying to find corresponding points that then can be converted to a point in the world space of latitude, longitude, height. Typically, they go through in a grid, either in ground space or in one of the images, and find the matches in the other, so you can get a dense grid. The reality is that a lot of the surfaces we look at have a mix of areas with lots of features and very bland areas. Sometimes it's more effective to create a random set of points, and just connect them by triangles, and interpolate. If there's a bland area, it might be a very big triangle with heights around the corners. In detailed areas, these triangles would be as dense as the every-pixel grid. But, in the end, we mostly have converted them to a regular grid. We've interpolated and resampled, and made these digital topographic models that are just regularly spaced points in a map projection, with hopefully the majority of them being real data rather than interpolation.
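
A minimal Python sketch of the automated matching step described above: for a point in one image, search the other image for the best-correlated window, then turn the measured parallax into a relative height. The window size, search range, and base-to-height ratio are illustrative assumptions, not parameters of SOCET SET or any USGS pipeline.

    import numpy as np

    def match_point(left, right, row, col, win=7, search=32):
        """Find the column in `right` whose window best matches the window around
        (row, col) in `left`, using normalized cross-correlation. This is the core
        of automated stereo matching; where it fails (bland areas), a human editor
        steps in, as described above."""
        h = win // 2
        patch = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
        patch -= patch.mean()
        best_score, best_col = -np.inf, None
        for c in range(max(h, col - search), min(right.shape[1] - h, col + search + 1)):
            cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
            cand -= cand.mean()
            denom = np.sqrt((patch ** 2).sum() * (cand ** 2).sum())
            if denom == 0.0:
                continue  # featureless window: no reliable match here
            score = (patch * cand).sum() / denom
            if score > best_score:
                best_score, best_col = score, c
        return best_col, best_score

    def parallax_to_height(parallax_px, pixel_scale_m, base_to_height_ratio=0.3):
        """Convert measured parallax (pixels) to a relative height (meters);
        the base-to-height ratio stands in for the stereo imaging geometry."""
        return parallax_px * pixel_scale_m / base_to_height_ratio

    # Toy usage: a random "left" image and a "right" image shifted by 5 pixels.
    left = np.random.default_rng(1).random((64, 128))
    right = np.roll(left, -5, axis=1)
    col, score = match_point(left, right, row=32, col=60)
    print(col, round(parallax_to_height(60 - col, pixel_scale_m=20.0), 1))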

Hargitai:

How different is Venus mapping, Venus altimetry, from Mars?

Figure 1. Altimetric Radar Image Map of Venus, 1997, USGS, I/2444, 1:50M

Kirk:

That is a very different world. Venus, of course, has a dense atmosphere, and so I used to be able to say, for most of my career, there were no optical images of the surface. There's now a small number of images where cameras have been able to see the thermal glow of the surface through the clouds. But it is looking through clouds, and so you don't see local features really. You see the continent-sized features. Everything we know about the surface of Venus comes from radar, in one form or another, initially with a series of missions with the Soviets and with the US doing radar altimetry, which is basically aiming the radar straight down, and timing the return. The distance corresponds to the time. You subtract the distance from the center of Venus, and that's the radius at that point. The problem is that the resolution of that is limited by the beam of the altimeter, and it's usually quite big, initially, 100 kilometers or more, and down to 10 by 20 or something for the last mission that did that kind of work.
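
As a back-of-the-envelope illustration of the altimetry arithmetic just described, here is a small Python sketch; the spacecraft distance, echo delay, and reference radius below are invented for illustration, not Magellan or Venera values.

    C = 299_792_458.0  # speed of light, m/s

    def surface_radius_m(spacecraft_dist_from_center_m, echo_delay_s):
        """Radar altimetry as described above: time the echo of a pulse aimed
        straight down, convert the two-way delay to a one-way range, and subtract
        that range from the spacecraft's distance to the planet's center to get
        the planetary radius at that spot."""
        range_to_surface = C * echo_delay_s / 2.0  # one-way range, meters
        return spacecraft_dist_from_center_m - range_to_surface

    # Illustrative numbers only: spacecraft 6,800 km from Venus's center,
    # echo returning after 4.977 milliseconds.
    radius = surface_radius_m(6_800_000.0, 4.977e-3)
    elevation = radius - 6_051_800.0  # height above a 6,051.8 km reference sphere
    print(round(radius), round(elevation))  # about 6,053,966 m and 2,166 m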

The other thing we were involved in was imaging radar. In this case, it's still ranging, but the radar looks to the side and, as the spacecraft moves along the orbit, that's a scan. That's one direction of the image, and the other direction is basically distance measured sideways. But because it's measured on a slant, there's actually a little bit of distortion or parallax from the heights. If you have two of those images, you can start to use them as a stereo pair, and you have to do different math than with an optical image. But the principles are very much the same. We developed pretty elaborate software to work with those images with SOCET SET, and to make topographic maps, and demonstrated it for a few areas. Also demonstrated that as the mission went on, the images had more and more gaps and noise in them, and it got harder and harder to map the areas of the planet that were imaged later on. That became a dead end, and we're now in a debate whether the new Venus missions that are being planned will warrant going back and trying to make these maps. But as opposed to 10 by 20 kilometer resolution, we probably would achieve DTM resolutions of hundreds of meters.
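
A much-simplified sketch of the "different math" for radar stereo: in ground-range radar images, relief displaces features toward the sensor by roughly the height times the cotangent of the incidence angle, so two images taken at different incidence angles show a measurable parallax. The angles and parallax below are invented, and real radargrammetry sensor models also handle slant range, orbit geometry, and planetary curvature.

    import math

    def height_from_radar_parallax(parallax_m, inc1_deg, inc2_deg):
        """Same-side radar stereo, flat-terrain approximation: a feature of height
        h is displaced toward the radar by about h / tan(incidence angle), so the
        parallax between two images is p = h * (cot(inc1) - cot(inc2))."""
        cot1 = 1.0 / math.tan(math.radians(inc1_deg))
        cot2 = 1.0 / math.tan(math.radians(inc2_deg))
        return parallax_m / (cot1 - cot2)

    # Illustrative: 500 m of measured parallax at 25 and 45 degree incidence angles.
    print(round(height_from_radar_parallax(500.0, 25.0, 45.0)))  # about 437 m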

Let me say one thing that I should have said in talking about the software and the process, and it's very general, and that is the key feature, the reason we went with commercial software, SOCET SET. I talked about how the computer matches points in the two images, and then does the geometric calculations. It doesn't always succeed. There's been 20 years' further evolution in how to do this image matching, and it still doesn't always succeed. What the commercial packages also offer, which would have been very time-consuming for us to develop on our own, is support for special stereo displays, 3D computer monitors of one form or another, and a 3D input device so that a human can go edit the mistakes that the computer makes. In some of our early projects, we relied on that very heavily.

The radar, like Venus, as I said, some of it's noisy data. There are places that we had to edit a lot. Radar on Saturn's satellite Titan, in many cases, the images were so small and so noisy that all of the data was collected with the editing process by a human. Other projects, the first one we did was the Mars Pathfinder Lander, which was a stereo camera on the ground. Those images were tiny, and so edge effects limit the computer's ability to match. Those images were all edge, and so [laugh] only the human was really successful in finding the ground points. It would've been difficult, time-consuming, tied to specific hardware, which would probably become obsolete, and have to be replaced just about the time you got the software working, if we'd tried to write our own tools for manual editing of these digital products, on top of which it's really boring, compared to looking at planets, and solving the problem of mapping the topography. Writing a graphics editor, a 3D graphics editor is something nobody really wanted to take on. That's why we went the commercial route, and we've relied on it rather heavily. There are lots of other researchers out there, and people use various tools from time to time, and they make an attempt to dispense with the human editing as much as possible. Some systems don't have that provision, and they simply are automated. That's great. It's much faster. It's a rapid first cut at the topography. We can achieve results with images that other people can't match, by having a human edit, and we can press the resolution to smaller features on the same images, often. For the very highest priority projects like assessing landing sites, where you want to know will there be a bump at the size of the lander that's coming, and that will cause it to high-center, and be broken on landing, NASA has paid us to use this more painstaking, more human-oriented approach.

Hargitai:

My question is how the work was organized here at USGS. I've heard that the photo processing staff or the draftsmen were in a different building than this one, than the geologists. To what extent was it the work of a group of people versus individuals doing their part, and passing it to someone else they had no connection to?

Kirk:

That's a great topic. There were different buildings here at the time I came. There were five buildings, I think. This building we're sitting in was not one of them. This is a replacement for the original building 1 that the scientists were housed in, which was essentially an office facility. Then the photo labs and draftsmen and so on were placed in a second building, and then more buildings were added, which were for non-planetary parts of the group, for the warehouse and machine shop, those sorts of things. This was a replacement for building 1, which became so old that it basically was condemned and couldn't be used. Being in different buildings was not an obstacle to a community working together at all. There were different functional areas, especially of the technical work, and that was driven, I would say, to a lesser extent by having some strong personalities—Sherman Wu, Ray Batson, a few others; Alex Acosta comes to mind—as leaders. They worked reasonably well together, but they had their own specialties. The real reason for that is that the technologies were separate. Topographic maps were made with these analytic plotter machines that I talked about. Image maps were made initially non-digitally in the photo lab, taking digital images, writing them on film in their normal geometry, and then projecting them onto photographic paper, and cutting it out, and gluing the pieces together, not just making a straight print but correcting for the obliqueness of the view for the different distances.

You would change the distance of the printing process to correspond to the distance that the image was taken at, so the images would all fit together. Now, the problem is you end up with a very heavy product that has a whole bunch of overlapping photos and a whole lot of glue attached to a base sheet, and the photos don't really entirely match. The next step of that process was to have very talented artists with high-end airbrushes, literal mechanical airbrushes that would spit ink in a very controllable way. They would basically put a translucent plastic over the image mosaic, and draw a representation of the surface, and they'd look at other images. In any one image, even the one that was on the mosaic, a crater might show up in a kind of messed-up, distorted form, or there could be a shadow or something. These guys and women would look at all the available images, and form a mental image, take away the bright and dark materials mentally, and portray what they thought the topography looked like.

Hargitai:

This is why it's on transparent tracing paper, I think?

Kirk:

That's why the original is transparent, because they did it on top of these photo maps. The photo maps were adjusted in a process that's in a computer, essentially, to get them to correspond as well as possible. We still use that process—it's called geodetic control or bundle adjustment of the images—to get things in the right places. That was the geometric control. Then the mental process of looking at the image in the mosaic and any other images that were available was the portrayal process, and it was drawn on plastic and then, of course, re-photographed, and made into a map. A lot of these shaded relief topographic maps that you see were made that way, and it allows—the artist did all kinds of things to make these look nicer. They could work with a mixture of very high resolution images in some areas, and low resolution in others. If you do that in a photoshop, a photographic laboratory, I mean, not the program, you end up with the low-resolution data. You see the pixels, you see squares where they become kite shapes when they're obliquely projected on the surface, and it looks horrible. The artists would totally hide that. So everything looked smooth and nice, and blended into the high-resolution data, so enormous amounts of brain power over the years going into interpreting these images.

That's a totally separate staff. You've got people collecting contours. You've got people making these photo mosaics, and a photo lab to support them. You've got airbrush artists. These are all separate technologies. The people who brought those together were, to some extent, the key engineers and mappers like Ray Batson and Sherman Wu but, in addition to that, the scientific staff in Astro have always been unusual in the community because they are the interface between the other researchers in the community, the missions that NASA flies, and then all these technical resources. No university department had stereoplotters like that, had airbrush artists. They relied on USGS for that. That meant our scientists were invited to participate in missions in a kind of dual capacity. They very definitely had to have scientific knowledge, scientific study objectives that were important, and they were selected based on publication record, all the things a scientist has anywhere. But they also had to bring and coordinate this effort to make map products to support the mission. I've seen it happen both before a given mission, when the USGS part of the team would assemble everything that was known about the target previously into nice maps (that's similar to what we've done during my career with making landing site maps for surface missions), and then in analyzing the data that come from the mission.

Hargitai:

Do you remember how the airbrush maps were replaced by something else? What was the point when they said we don't need them anymore?

Kirk:

That occurred early in my career, and it was a somewhat gradual process. The evolution of the technology that made that possible was in two places, one, on Earth and, two, elsewhere in the solar system. On Earth, it was the software to process images in the computer, and not just put them together in a nice way, like Photoshop the program, but to make sure the scales and the distortions and the map locations are accurate, doing digitally what I described being done with paper and film before that software came along. It became possible to make image mosaics that way. As I said, in some cases, those look rather strange—and we could go pull examples of this—where it's printed at a scale that preserves the detail in the highest resolution images, and in another part of a body you can actually see the pixels, and they're huge, and they're very jagged-looking and, to someone who is used to the airbrush art, rather ugly. But the second thing that was happening is that there's a natural progression in exploration of the solar system from a flyby that gets only a few images, or sees one part of a planet, or sees it from a great distance on the other side, and sees only one side of it close up, to orbiters that start mapping much more systematically. For example, Magellan at Venus had a north-to-south orbit. It had this radar that I described. But it basically got image strips that were a constant width and a constant resolution from pole to pole. When they were all put together, except for the fact that there were gaps where images hadn't been taken for various reasons, you had a uniform image map of the whole planet, and it was possible to put that together. It looked really nice, and you didn't need an airbrush artist to portray the nice appearance of surface features. The other thing that was going on was that the topographic data was going from contour generation to these dense 3D models. It turns out when you have that, you can actually calculate in the computer what the shaded appearance would be. It was probably about the time I got here in the late '80s that the USGS, not Flagstaff but the terrestrial section of it, made a pioneering shaded relief map. I think it was in Hawaii, but I'm not positive. I remember seeing this, anyway, a very pretty map that was done. It looked like an airbrush map, but it was done in the computer from digital height data. We started using that approach as well. Between the density of image data, the ability to make image mosaics showing both shape and bright and dark and even color features that were decent looking in the computer, and the ability to collect dense topo data, there was less and less and less need for these manual processes. Part of that was that the workflow then became more consolidated. The things that were done on the analytic plotter, in the photographic room, on the drafting table, with the contour plotters to make maps, all of that became processing, digital processing on the same computer systems.
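
A minimal sketch of computing shaded relief from a digital height grid, as described above, assuming the DEM is a NumPy array of elevations; the sun direction, cell size, and the simple Lambertian shading with no cast shadows are illustrative choices, not the algorithm the USGS used.

    import numpy as np

    def hillshade(dem, cell_size=100.0, sun_azimuth_deg=315.0, sun_elev_deg=30.0):
        """Shaded relief from a digital elevation model: take slopes from the
        height grid and light them from a chosen sun direction."""
        dz_dy, dz_dx = np.gradient(dem, cell_size)
        # Unit surface normal for each cell.
        norm = np.sqrt(dz_dx ** 2 + dz_dy ** 2 + 1.0)
        nx, ny, nz = -dz_dx / norm, -dz_dy / norm, 1.0 / norm
        # Unit vector pointing toward the sun.
        az, el = np.radians(sun_azimuth_deg), np.radians(sun_elev_deg)
        sx, sy, sz = np.sin(az) * np.cos(el), np.cos(az) * np.cos(el), np.sin(el)
        return np.clip(nx * sx + ny * sy + nz * sz, 0.0, 1.0)

    # Toy example: a single Gaussian hill, 2 km high, lit from the northwest.
    y, x = np.mgrid[-50:50, -50:50]
    dem = 2000.0 * np.exp(-(x ** 2 + y ** 2) / 800.0)
    relief = hillshade(dem)  # values 0..1, ready to display as a grayscale image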

Hargitai:

Was it at the same time that the workers who actually made the photomosaics disappeared?

Kirk:

Sure. There was a natural process of retirement, and I was not here in the Apollo era, of course. I was in school [laugh], I mean, elementary school [laugh] at that time. Probably there was a process over the years where some of the people who had been in Apollo had retired, and others had been hired to replace them. But, eventually, yes, there was a consolidation. Part of it had to do with facilities; part of it with retirement; part with lack of need. Eventually, the building that had housed our huge photographic lab, and which had special cameras for rephotographing the mosaics and artwork and so on, that could take a two-meter subject, and put it onto a meter-and-a-half film, and then labs to process that size of film, that was eventually torn down. Of course, it was a toxic waste site because it had been a photo lab for so many years, and it had to be cleaned up. Even in the facilities that were retained, some of this stuff was very space-intensive. The analytic plotters, like I said, a couple of them fill a room. You need another whole room for a flatbed plotter that's the size of a couple of pool tables to plot out the contours, and turn it into a map. We downsized just our floor space of regular facilities by a huge amount over the years.

Hargitai:

After the digital era started, what kind of phases did the workflow have? What kind of professionals or draftsmen had to be hired? What was the pipeline for creating a topographic map?

Kirk:

The draftsmen became less and less important. In fact, as they retired, they were not replaced, the airbrush artists, the people running the contour plotter, which we got rid of. The workflow and the professional organization really all involved people working on computers, and it broke into about three pieces that you could identify, and the supervision was mixed. These people were working together more than in the earlier era. But, functionally, we have still a group of people who write our own in-house software called ISIS, and it had a predecessor called PICS, and there was a version before that. So it's image-processing software, which transforms images, projects them onto planets, into map projections, merges them together, does quantitative stuff that Photoshop doesn't do, like measuring spectra. ISIS actually stands for Integrated Software for Imagers and Spectrometers. Although that was its initial focus, data sets that were a full spectrum of the surface at every point, from which you could identify materials, that's less of the focus now. We haven't changed the name. We've talked, over time, about changing it. Anyway, that software, which is written in-house, does photogrammetry, which is measurements from images. It does rigorous geometric transformation of images. But it has never focused on the creation of these topographic models. In that sense, it's more two-dimensional than the stereo mapping. Then there were developers developing the in-house software. There was a cadre of analysts who were experienced with using it. The hardest part is getting all the images in the right place. You basically have to do the kind of image-to-image matching I talked about earlier in making topographic models, but only at a few points where images overlap next to each other. The idea is to shuffle the images around until they all line up, and you go seamlessly from one image to the next with coherent positions, because basically that information, if you knew where the camera was and where it was pointing, you wouldn't have to shuffle. You'd just go from camera to ground, and it'd be right.

But there's always little errors of measurement, and so we reconcile those. In fact, the software that was written here that does it is called Jigsaw, because the programmer liked jigsaw puzzles, and likened it to putting the pieces together. Those are two of the three functional groups. Then the third was the group I was supervising as a result of having brought in this commercial stereo technology. We were focused on the 3D mapping techniques. There's a huge amount of overlap. We operated as a functional group, with me as a scientist/lead engineer, understanding the geometry; Annie Howington-Kraus, who was an employee of Sherman Wu's, who was absolutely brilliant at everything, managing people, managing projects, making the [laugh] maps herself, and writing the software at my direction that would help us understand certain images, and figure out the workflow to best utilize each new kind of image, and make it into a topographic product; and then other analysts, who did the hands-on work. Again, with the topo stuff, we also had to go through this process of shuffling the images of getting them to line up; only we were checking our work in three dimensions rather than two. You have two images here, and it's a stereo model and heights, and another two images, and it has to agree both horizontally and vertically. The analysts did that. They did the kind of manual editing of flaws that I talked about earlier. There was still some segregation of these groups. I think it continues to become less and less over time. We certainly have made more of an effort than in my early career, for example, to take the people who write software, and get them some experience using the software, making maps, at least 2D maps, if not 3D. We've tried to get more experience between the 2D and 3D worlds of those analysts, learning both the commercial and the in-house software.
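
A toy least-squares version of the "shuffle until everything lines up" idea behind a Jigsaw-style adjustment, reduced to solving for one 2D shift per image from tie-point measurements. The real bundle adjustment solves for camera pointing and position on a curved body rather than flat shifts, so this only shows the structure of the problem; the tie points below are fabricated.

    import numpy as np

    def adjust_image_offsets(n_images, tie_points):
        """Each tie point says the same ground feature was measured in image i at
        xy_i and in image j at xy_j.  Solve, in a least-squares sense, for a 2D
        shift per image so the shifted measurements agree, pinning image 0 as the
        reference so the solution is unique."""
        rows, rhs = [], []
        for i, xy_i, j, xy_j in tie_points:
            for axis in (0, 1):
                row = np.zeros(2 * n_images)
                row[2 * i + axis] = 1.0     # +shift_i
                row[2 * j + axis] = -1.0    # -shift_j
                rows.append(row)
                rhs.append(xy_j[axis] - xy_i[axis])
        for axis in (0, 1):                 # pin image 0 at zero shift
            row = np.zeros(2 * n_images)
            row[axis] = 1.0
            rows.append(row)
            rhs.append(0.0)
        shifts, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return shifts.reshape(n_images, 2)

    # Three overlapping images; image 1 is really offset by (+5, -2), image 2 by (+9, +1).
    ties = [
        (0, np.array([100.0, 100.0]), 1, np.array([95.0, 102.0])),
        (1, np.array([200.0, 150.0]), 2, np.array([196.0, 147.0])),
        (0, np.array([300.0, 120.0]), 2, np.array([291.0, 119.0])),
    ]
    print(adjust_image_offsets(3, ties))  # recovers (0, 0), (5, -2), (9, 1)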

By the way, I didn't mention but it comes up as part of the workflow, that in-house, more 2D-oriented software is absolutely essential to what is done in the topographic mapping, because we use that ISIS system to understand the images, to read them in the format supplied by a mission or by NASA archives, to read the labels, to read the best pre-existing info about where the camera was and where it was pointed, which gives us a starting point, which we adjust to figure out where things are on the ground. Often there's artifacts in the images, some parts of the camera are more sensitive than others, and so those are corrected through a radiometric process, and the images look more uniform, and they're easier to use in stereo matching, as well as making nicer image products. All of that image-processing stuff, we use intensively to support the commercial software that makes the 3D models, and lets us edit the 3D models.

Hargitai:

How much overlap or communication is there, or is needed, with the geologists?

Kirk:

Still quite a lot. We still rely on the idea that I described earlier of getting involved in missions through the scientists being invited. Those scientists then have to, you know, part of their proposal to be on a mission is “I bring these experts with me,” and so they have to talk to these folks and become coupled. It's all a matter of personal interests and style. I was an extreme case, where I went over to spending 80%, 90% of my time on the technical. I really became an engineer from a scientist. Most of our current staff who are involved don't do that. Instead, they spend 80% of their time being scientists, and 20% interacting with the technical staff. It's dependent on the interests of the people that are available to us. But the general model of a close coupling from mission to scientists to technical staff still remains.

Hargitai:

Are there people who do the traverse planning, between the geologists and the technical staff?

Kirk:

That's an interesting question. The technical people here in Astro mainly have served for surface missions, which need traverse planning. We've mainly provided topographic data before the mission to identify a safe landing site. Typically when there's a new lander or a new rover, the community is asked, and people may volunteer ideas for 100 sites, and those are quickly winnowed down to a few that are at an altitude that's reachable, and a latitude that's reachable, and not in horrendously knobby terrain that anyone can see is a death trap, and so on. Then there's a mapping process, which is twofold in goals. One is to establish the geology, and the science, and the features, and sometimes the composition, and all the reasons you'd want to go there. The other is to establish the topography and the density of small rocks, lander-sized rocks on the surface, and the thermal environment, which can be crucial to a lander, and all the things that affect safety, and the decision is made. That's really where the mapping groups here come in.

Typically the rover missions, it's the Jet Propulsion laboratory that has, yet again, their own set of software for stereo mapping with the rover stereo cameras, its eyes. With rapid turnaround, without the kind of human editing that we do for permanent maps, they establish what is the safe terrain. A lot of that actually happens onboard the rover now too. The rover actually has machine vision that it not only takes images, but it turns them into enough of a representation that it can drive itself around obstacles, and be given much more general directions. But that's tactics, and you asked about traverse planning, which is strategy. To make that more explicit, the rover might be sitting somewhere, and we see a really interesting rock over there. We know it's interesting because the image shows it has crystals in it. The spectral imaging shows it's a different material than any that's been seen. Then the idea is the humans on the team decide to go to that object next, and they decide when it's going to happen, and is it going to be a one-day trip or is it a multi-day trip? Do you need to go to something first? Do you need to turn the rover around, and come at it from the other side? They do all that planning with the software at JPL, and it is most often our scientists who are doing that. It's strategic scientific planning at that level of what the scientific targets are. We're going to visit this rock, we're going to look into that crater, we're going to go here, and we're actually going to take a sample, and do something on board the rover. They make those kinds of decisions. Our technical staff here has been involved in working with getting down images, and assessing them, and so on, but not as heavily with the mapping software for some of the other tasks that we've described.

Hargitai:

Now, I've seen many Apollo-era traverse maps in the [USGS Astrogeology] archive. What do you see as a potential for preserving these digital mapping systems for the future?

Kirk:

That's a really good question. I don't think we've ever had a focus on permanent traverse mapping. It does bring up a related topic, which is that in terms of mapping from orbit, Astrogeology prided itself on slower, more accurate techniques, and that meant often there was a fast process, even in an orbital mission, of putting together some images into something people could look at. We would come along and, whether it was an airbrush map on paper or it was a digital mosaic, we'd make the most accurate product that then would be the history of things. The images obtained by the rover, of course, they're all archived. But we don't have a hand in that, and I can't really answer at what level derived products are archived. Some of the traverse maps that are published, especially nowadays, are from people who specialize in reconstructing what happened. I'm thinking of Phil Stooke, who has published books on this. He basically goes to the base maps that have been created with orbital data here in Flagstaff. But he uses all the information he can find to reconstruct what those traverses were.

Hargitai:

Let me go to the actual projects. First, the earliest is a Venus impact database.

Kirk:

There have been efforts to make databases of at least the largest craters on different bodies, and that's been done by different people. There is an informal group called the Crater Consortium that holds meetings I think annually; if not annually, every couple of years. They get together and talk about the databases that different people have compiled. Of course, in addition to the location and the size of craters, as long as they're looking at them, they write down other things. Is it fresh? Is it old? Can you see the ejecta? Is there a sharp rim? Are there flows of melt from the material? Is there a central peak? Different modifications of craters are relevant to different kinds of bodies. These databases have that kind of information.

But, by and large, they're set up by different scientists, not all here in Astro. We've had a role in coordinating that and arranging those Crater Consortium meetings. For that, the person to talk to is probably Trent Hare, because he's a good person to talk to for everything—along with Annie, also a person with many hats, Annie Howington, who I mentioned earlier. He's also someone hired originally by Sherman Wu, and who has capabilities as a software programmer, as a photogrammetrist, as a GIS expert, as a standards expert. Just this place would not be the same without him. The geologic mapping quadrangles, that process has always been managed out of Flagstaff, and it's in part because the end state in the past was to go have paper maps printed, which the Survey did back on the East Coast. The goal would be that the documentation is in the end product. It's a map with a lot of text around the map and geologic column but also explanatory text, and it names the authors. There's always been a problem that people get funded to make these maps, and they run out of time, or they lose the students that were active helping them, or they run into problems, and things sit unfinished.

Jim Skinner manages that project now. Trent Hare is involved. Ken Tanaka was the manager for a long time, and these guys trained under him. They're the ones that will know best about the history of geologic maps. In particular, what happened with maps that fell by the wayside, Jim would be the person for that, and how multiple maps by different authors were reconciled into a global map. That's a difficult effort. They did a similar project, going back to the Apollo era, geologic map of the moon, which was done in hemispheres, essentially, but six pieces like the sides of a cube. They didn't fit together. The standards were different. The positional accuracy was different. That all had to be digitized and adjusted and corrected with modern knowledge to come up with a map that was not only digital but seamless globally. There's a bunch of those projects that have been done, and I was not directly involved in them, so I'm not the best person to ask about them.

Hargitai:

I've seen several Pathfinder publications from you.

Kirk:

Yes.

Hargitai:

My question is, how might Pathfinder landing site mapping have been different from Viking landing site mapping? In addition to the stereo camera that you already mentioned, what was in the new—?

Kirk:

We were involved in that mission both before and after landing, and so I'll talk about both. The before is an interesting thing. The resource for choosing and validating as safe the Pathfinder landing site, the data, was the same as for Viking. The Viking orbiters were designed to hold onto the landers, take images, find a safe site, and then release the lander. That's how that was done in the '70s. Twenty years later, those images that had been taken over the course of the mission were the best, the only resource for assessing these sites. The difference was in how we used them. Instead of analog plotters, we were doing—no, we were still using the analog plotters for stereo then, I'm sorry. It was the same images and the same technology. But, to some extent, it was aided by the shape-from-shading, photoclinometry technique that I mentioned earlier, which is not a stereo technique. It's looking at bright and dark as reflecting different slopes, and trying to build up a shape model from that. It has the advantage that it can resolve the topography of features as small as the image sees, whereas stereo matches images together, and you need a bunch of pixels to make a recognizable feature.

Then you recognize all those same pixels in the other image so that the resolution of stereo is poor. This was the first case where we used the photoclinometry technique to try and assess the slopes of the surface that were dangerous to the lander, at the highest resolution we could, which was about 40 meters. Then after the landing, as I said, it was a fixed lander. It had a dual camera on a mast, and the whole camera, with two eyes, could point in any direction. It took hundreds of images to make a full view of the landing site. We did use the digital photogrammetry technology. It was our first project with that to try and map the landing site, and it was really hard. It was much harder than it would've been to start with something like Viking images, because the images were tiny. The thing about being on the ground is that when you look lower down, right at what's near the lander, it's almost like an aerial photograph but slightly oblique. But as you look out towards the horizon, the distance goes extremely large, the stereo is no longer accurate, and there's a gradient. The bottom edge of the images is much closer than the far edge, so it's difficult. It even could be difficult for human viewing, in some cases, to look at these stereo things. You couldn't really see in 3D the near stuff and the far stuff at the same time, and the computer had the same problem. Then building up a map of the landing site out of hundreds of images with thousands of points measured on each one, that was very difficult. But it was sure a learning experience for all of us to tackle that.
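
A one-dimensional sketch of the photoclinometry (shape-from-shading) idea Kirk describes above, assuming a uniform-albedo Lambertian surface and a known sun elevation: invert brightness to slope, then integrate slopes into a relative height profile. Real photoclinometry uses proper photometric functions and works in two dimensions, so this is only a toy round-trip on synthetic data.

    import numpy as np

    def profile_from_brightness(brightness, sun_elev_deg, pixel_scale_m):
        """1-D photoclinometry: for a uniform Lambertian surface lit from a known
        direction, brightness I = sin(sun_elevation + slope) for a slope tilted
        toward the sun, so slope = arcsin(I) - sun_elevation.  Integrate the
        slopes to get heights (the absolute level is unconstrained)."""
        e = np.radians(sun_elev_deg)
        slopes = np.arcsin(np.clip(brightness, -1.0, 1.0)) - e
        heights = np.cumsum(np.tan(slopes)) * pixel_scale_m
        return heights - heights[0]

    # Round trip: a 150 m high ridge imaged at 30 degrees sun elevation.
    x = np.linspace(0.0, 4000.0, 200)                    # meters along the profile
    true_heights = 150.0 * np.exp(-((x - 2000.0) / 600.0) ** 2)
    slope = np.arctan(np.gradient(true_heights, x))
    brightness = np.sin(np.radians(30.0) + slope)        # synthetic image line
    recovered = profile_from_brightness(brightness, 30.0, x[1] - x[0])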

Hargitai:

The next is the Mars Exploration Rover and the Curiosity landing site selection. I've seen many Apollo landing site selection maps and descriptions of the process, and some Viking ones, little booklets with potential Viking landing sites, with quick geologic maps. In these Mars Exploration Rover landing site selection papers, it was lots of ellipses on a topographic base. It seemed to me that geology was more important in the Viking and Apollo landing site selection than in MER or Curiosity, or maybe that's incorrect.

Kirk:

No. I think there's still a dual process. Boy, I have gone to a lot of meetings in my life, landing site selection meetings, because typically, you know, every six months leading up to a mission like that, there'd be meetings. They would start by talking almost entirely about geology, and scientists proposing, "This is my favorite site. Please consider it," and as the number of sites got smaller, moving more towards the safety aspects. Geologic mapping is definitely still important. As for the key references there, there's usually an overall landing site selection paper. Matthew Golombek at JPL has been intimately involved in almost every round of that, and in coordinating that process. It's one of his specializations. Of course, he was the project scientist for Mars Pathfinder, and that was the first time that we did that process in my career. But it's become refined as we have higher and higher resolution image data, and it's become much more painstaking. In the Pathfinder days, we were glad for what we had, which was only Viking. For the MER landers, we had samples from the Mars Orbiter Camera, which was three meters per pixel, and either stereo or the photoclinometry. We were happy to get a sample of each terrain type in the area, and measure its roughness, and then extrapolate to the landing area as a whole. Nowadays, they get quarter-meter-per-pixel images from HiRISE, and demand a stereo model that's actually built up of coverage data over the entire ellipse, and put together seamlessly at that level. But that's the Flagstaff contribution. The geologic mapping is still going on. Identifying the terrain types just to get the percentage of safe and unsafe areas is on the fringes of geologic mapping. It's done by geologists. But the geologic history of the site, and why you should go there, and where the samples are that might show ancient water, even ancient life, which are the goals nowadays on Mars, that's all geologic mapping. It's done by people in the community, and managed, as I said, through these sets of meetings. There's a whole other aspect too.

I've been involved—the people that talk about the topography and the safety and so on of that are called the Council of Terrains, which I think is fantastically pompous, and I was always proud to have that associated [laugh] with my name. But it's totally informal. It's just the people who talk about these things. But these missions also have a Council of Atmospheres, because it turns out some of the sites that are interesting to go to, and maybe the terrain doesn't look so scary, the weather is bad, and there may be high winds that prevent you from landing there. That has to be assessed too. It draws on the global topography but not so much on what we do here.

Hargitai:

Then the difference, from my perspective, might be that, in the past, there were more final paper map products of these discussions?

Kirk:

I think that's definitely the case, yes. The standard is no longer a paper map. For these landing site assessments, by and large, it may not even be a digital map, or at least not for every site that was considered. On the Astrogeology website, we have the image mosaics, the DTMs for these various landing sites that were selected, especially the newest ones where the whole site was mapped, and it was done seamlessly. Those products are there in digital form. The geologic maps are going to be the responsibility of the individual authors. I think, by and large, they get published as figures in papers, not at full size. It's a good question and, again, Trent is the person I would ask about this, if he were handy, whether there are digital copies of those geologic and terrain classification maps from these missions. For people who look at the paper, and say, "This is great, but it's only 10 centimeters square. I want to merge this data set with other information, and really play with it," it may well be accessible, but it isn't going to be printed. We haven't done that in more than 10 years. There've been in the last 10 years only a handful of synoptic maps, a new topographic map of the Moon, some image mosaics and geologic maps of large satellites. But it's rare nowadays.

Hargitai:

THEMIS was a controlled global map of Mars. What's the importance of it?

Kirk:

There are two areas of significance for that data set. One is that it was a step forward in the available resolution for a global coverage of Mars. The previous standard was the Viking Orbiter images, which had some very high-res images in a few areas. But the global coverage was hundreds of meters per pixel, and THEMIS is about 100 meters, if I recall correctly—I haven't worked with it in years—and obtained systematically. It was possible to make an image mosaic at several times higher resolution than before. The other is that it is a thermal camera, and there are day and night imaging, and it allows you to understand some of the material properties of the surface as well as just the appearance. In particular, how rapidly things cool off at night has to do—it's called thermal inertia, and the thing it depends on most is essentially how dense the surface material is. You can tell bedrock from sediment from fluffy dust with these images. It's an additional clue into what's there. The next step forward after that is the Context Camera, about a six-meter-per-pixel camera, on Mars Reconnaissance Orbiter. It is getting to the point after many years of operation of building up global coverage.
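
For reference, the thermal inertia Kirk mentions has a compact definition, sketched below with rough, textbook-style material properties; the numbers are assumptions for illustration, not values derived from THEMIS.

    import math

    def thermal_inertia(conductivity, density, specific_heat):
        """Thermal inertia I = sqrt(k * rho * c): how strongly a surface resists
        temperature change, which sets how fast it cools off at night.
        Units: J m^-2 K^-1 s^-0.5, often called 'tiu'."""
        return math.sqrt(conductivity * density * specific_heat)

    # Rough illustrative properties: k in W/m/K, rho in kg/m^3, c in J/kg/K.
    materials = {
        "fluffy dust": (0.01, 1000.0, 800.0),
        "sand": (0.06, 1600.0, 800.0),
        "bedrock": (2.0, 2700.0, 800.0),
    }
    for name, (k, rho, c) in materials.items():
        print(name, round(thermal_inertia(k, rho, c)))  # dust ~89, sand ~277, rock ~2078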

Hargitai:

Two more personal questions. What do you consider the most difficult project?

Kirk:

I think, in a lot of ways, Mars Pathfinder was the most difficult, because it was pioneering, as I've already described, and so I'll resist the urge to repeat myself. We were just starting to use the digital photogrammetric capability, plus it was surface data, and we didn't have all the tools we needed to work with that, and we still don't. It was a case where we were involved both with the orbital mapping before site selection, and with mapping on the ground. We used all of the pieces of technology that we had, and many of them for the first time. Other missions that were especially complex, the one that stands out in my mind is the Cassini radar, which I was involved with. Its focus was mapping Saturn's satellite Titan. Cassini was orbiting Saturn but only flying by Titan, so it would get these strips of radar imagery on each flyby, and we had to try and piece them together. We developed software to do topographic mapping from radar stereo with that, which was very hard because the images were not all that sharp. Only the human brain could make sense of them, and even then it was difficult.

Hargitai:

Wasn't it a good opportunity to invite the airbrush artists back?

Kirk:

We did not ever do that. We eventually arrived at pretty decent-looking digital mosaics because we were able to mosaic these synthetic aperture, high-resolution radar image strips. Then there was a lower resolution radar scatterometry that we could place behind that to fill the remaining holes. Yes, an airbrush artist could have made it look cosmetically better, but we got a lot out of the data. Of course, the mission had all these other instruments. There were some optical images at a few wavelengths that could see through the atmosphere, infrared spectrometry that could see the surface, other information to be integrated, so vibrant science that was interacting with trying to make out what was going on in the images, and just a really complicated world with more processes acting than anywhere else except the Earth.

Hargitai:

Is there any favorite location of all the areas that you have mapped?

Kirk:

Again, my favorite body is Titan for sure, because it's so complicated. Don't ask me if I'd like to physically go there. No, heck no, because it's—

Hargitai:

Cold.

Kirk:

—incredibly cold and incredibly toxic. All these materials on the surface, tholins killed off a large number of planetary scientists [laugh] who worked with them and studied them in the lab. They're these complicated organic molecules you get when you have simple hydrocarbons and nitrogen, and they're exposed to energy and so on. The brown glop in the smog of Titan and on the surface is this stuff. But there's also cyanide, and there's benzene. Just everything there is toxic. You'd never get your spacesuit clean enough [laugh] after walking on the surface of Titan to be safe again. But mentally I would love to go there, and see rivers of liquid hydrocarbon at hundreds of degrees below zero, and oceans, and vast sand dunes, and so on. A fantastic place.

Hargitai:

The last question: your education is based on physics, if I'm correct, so how did you move towards—what was the influence towards planetary science?

Kirk:

I'm a baby boomer, a late baby boomer. I grew up in the 1960s, and I was fascinated by space exploration. I had a scientific interest from the start. My parents were biologists. They're scientists but not planetary. I knew I was going to become a scientist. I was interested in astronomy, I was interested in physics, I was interested in geology, and I was interested in space exploration as exploration as well as science. That's really probably the thread that continued through my career. It made me start doing things like, even as an undergraduate, learning more and more about image processing, and working with it, because that's a big part of what we do, and learning about stereo and so on. I went to graduate school, and became a planetary scientist doing physics-related things—so you can call that geophysics—but still with part of my time spent on these remote sensing techniques. I was very happy when there was a job opening here in Flagstaff at the time I graduated, and I applied, and came here and, as I said, hired as a research scientist. But my creativity is stronger in the engineering areas than it is in the science areas, frankly, and so I was happier moving more into those areas. It satisfied the need to explore, and to do complicated technical things, and still to be in the middle of all this great science, and at least contributing to it.

Hargitai:

The globe set?

Figure 2. Venus Magellan Globe

Kirk:

When we're talking about the Venus globe, the rainbow-colored one, it occurred to me to mention that it is one of my favorite projects I was involved in. I didn't lead the gathering of any of the data sets in that, but I did the processing that made that globe image, pretty much single-handedly, and it was a first experience of its kind. What's going on there is that it is a radar image that has been colored to represent elevation. But what was fun to me in making it was that I used every mission data set I could to make it as gap-free as possible. The main image is a mosaic of all of the Magellan images, which were the highest resolution and most global data set. The next image under that comes from the Soviet Venera missions; 15 and 16 had synthetic aperture radars at a little bit lower resolution than Magellan, and they filled in a lot of the Northern Hemisphere gaps. There's still lower resolution data from US Pioneer Venus that I filled in with. That left only a tiny gap at the South Pole, which I filled with a matched shade of gray [laugh] so it wouldn't have a black hole there, which maybe isn't the right way to do it. But it's esthetic, and it was only a tiny area. The same thing with the elevations. The Magellan altimetry covered most of the planet. The Venera altimetry covered a lot of the gaps that remained. The Pioneer altimetry covered almost everything after that. It was my first time working with big map images, using the USGS software to get all the pixels in the right place. But then I stacked them together, and merged them in Adobe Photoshop, and did it in a seamless way, and colorized, you know, turned the altimetry to color, and overlaid it on the images. It would probably take me a few minutes to find areas where these lower-res data sets are filling holes, other than the South Pole, which I know of. From a distance, it looks like a complete seamless map of Venus, and it still gets used. There have been new images of Venus in the news recently, and they show this map that we made for the globe as a comparison. You can still buy them. In that sense, unlike a lot of my other experiences, it was a process that came to a complete conclusion, and a deliverable product that's great, and people still use, so it's very satisfying.

Hargitai:

Was it the first that used the rainbow ramp for elevation?

Kirk:

No, I don't think so. If you look at the big globes here, there was an Earth topographic globe. I didn't invent the rainbow; there were various maps done that way. There's a funny story and a digression about Venus. Radar does not see any of the colors—it's a completely different wavelength. It responds to material differences, but it does not see the bright and dark colors that the human eye does. A lot of the images out of Magellan were tinted bright orange. Especially the perspective views and simulated surface views were tinted bright orange, because the Soviets took actual optical images on the surface, and those were orange. But they're not orange because Venus is made of orange rocks the way Mars is. Venus is probably pretty gray for the most part. It's the cloud layer that filters out the blue light and makes the surface look orange. So it went from an idea of what it would look like to be on Venus, rather orangey, to large numbers of orange products being produced. There was an orange Venus globe, but it was just the images tinted orange, and we made a limited number of those by hand. Then we worked with a globe-making company, and we made the base image for this rainbow one, and I put a color scheme on it to represent height. There's a detail there, too: if you just take a straight rainbow with equal amounts of red, yellow, green, blue, et cetera, you end up with a globe that's almost entirely bluish-purple. What I actually did was stretch the rainbow scale so that on Venus there would be about equal areas of each color. Red corresponds to a larger range of elevations, because there isn't much high terrain that would be red; I extend the reds farther down from the top of the scale, so there'll be more red on Venus, and you can find the red high areas. Conversely, the blues are compressed to cover just the lowland plains, a smaller range of elevations than they would span in a straight rainbow. There was some fun technical work in doing that as well.
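[The equal-area stretching of the rainbow can be sketched as follows. This is only an illustration of the general idea under assumed inputs: random elevations and matplotlib's "turbo" colormap stand in for the real Venus altimetry and the actual color scheme. Mapping each pixel to its percentile rank makes every color band cover roughly the same fraction of the surface.]

```python
# Minimal sketch of an equal-area color ramp: map elevation to percentile rank,
# then apply a rainbow-like colormap, so each color covers about the same area.
import numpy as np
from matplotlib import colormaps

def equal_area_rainbow(elevation, cmap_name="turbo"):
    flat = elevation.ravel()
    rank = flat.argsort().argsort() / (flat.size - 1)            # 0..1, uniform by construction
    return colormaps[cmap_name](rank.reshape(elevation.shape))   # RGBA image

# Hypothetical skewed elevations: mostly lowlands, a little high terrain.
elev = np.random.exponential(scale=1.0, size=(180, 360))
rgba = equal_area_rainbow(elev)   # reds now occupy about as much area as blues
```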

Hargitai:

Wasn't there a problem with the radar-bright and the radar-dark surface areas versus the colors—?

Kirk:

That's always a problem, yes, and I can't say it's perfectly solved there. There was a certain amount of latitude in trying to merge the color onto the image to make sure that the color is visible, but it's not completely effective. If you look at it, you can see that a lot of the areas that aren't blue are not bright red either. They're brownish to whiteish, and that's because the brightness image is coming through. It turns out Venus is more radar-reflective at high altitudes. It's partly a roughness effect, in that the highlands are rougher, and partly a chemical change that makes them more radar-reflective. So those high-altitude areas that should be colored red tend to be brown to pink to white. But at least if you have trichromatic color vision, you can still tell the highlands from the lowlands. It's a whole separate topic that none of this addresses the needs of people who have restricted color vision. You cannot make a brightness and color-coded map that every human can understand, unfortunately, because we have different kinds of color vision.
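[The merging of color with radar brightness can be sketched roughly as follows. This is an assumed, generic version of the technique, not the specific Photoshop steps used for the globe: the hue and saturation come from the color-coded elevation, while the lightness comes from the radar image, so bright radar returns wash the color toward white, as described above for the highlands.]

```python
# Minimal sketch: keep hue/saturation from the elevation color layer and take the
# value (lightness) channel from the radar brightness image.
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def blend_color_with_brightness(color_rgb, brightness):
    """color_rgb: (H, W, 3) array in 0..1; brightness: (H, W) array in 0..1."""
    hsv = rgb_to_hsv(color_rgb)
    hsv[..., 2] = brightness          # bright radar areas become pale, dark areas stay dark
    return hsv_to_rgb(hsv)

# Hypothetical co-registered inputs.
color_rgb = np.random.rand(180, 360, 3)   # e.g., the equal-area rainbow from above
radar = np.random.rand(180, 360)
merged = blend_color_with_brightness(color_rgb, radar)
```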

Hargitai:

Is this globe more of an outreach or educational project? What's the scientific value of it?

Kirk:

There's always been a justification for this type of product, whether it's the global printed maps or the globes. Scientists use them for a number of things. At some scale, they can be used as context to understand local studies. They can certainly be used as outreach to your scientific colleagues. I'm talking more about the digital images or the paper maps than the actual three-dimensional globes. People start their talks by showing the global altimetry of Mars or Venus or whatever, and say, "I'm going to talk about this area here, and the context is that it's downslope from this volcano," and so on. Those kinds of global data sets are good for that. The globes, to be honest, are a prestige product, but globes always have been a prestige product. If you go to Europe and tour cities like Vienna or Rome or Amsterdam, they have these old globes. Some are enormous, so that people could actually see map details of their own countries in a globe format. But it's not super practical. The king wanted a globe because it proved that he was a king who could afford a two-meter globe of the world [laugh] to show where his navies had been. To some extent, it's the same thing for space agencies. But it's useful for public outreach. I guess what I'm saying is that individual scientists like to have the globes in their offices. They like to have the global colorful maps on their walls. I think people stare at them, and they get ideas, so it's useful in that sense. But it's a prestige product.

Hargitai:

I saw a study a few years ago. It was a USGS study. They asked the scientists whether they have maps on their walls, and, I don't know, more than 60% said that they do. That's also part of the archival problem, that there are no more paper maps.

Kirk:

Right.

Hargitai:

What are you going to put on the wall?

Kirk:

That's why we do still, even in the current decade, occasionally produce new synoptic global maps.

Hargitai:

For National Geographic, they'll do—

Kirk:

They do theirs, yes. But we've done moon maps, and we've done Io and Europa and other bodies that are involved with the current study.

Hargitai:

May I ask one last question—

Kirk:

Sure.

Hargitai:

—that's maybe a political question. If you needed to convince congressmen to finance your planetary mapping projects, what would the argument be? What can be said to justify spending public money on these other-world projects?

Kirk:

On the missions or the mapping element of it?

Hargitai:

The mapping for missions or mapping of mission data.

Kirk:

Justifying planetary research can be a very political issue, depending on who you're talking to, their mindset about spending money on research generally, how they feel about climate change on Earth, and whether topics that are adjacent to that, which planetary science is, become more or less desirable to them as a result. But in terms of the mapping element of the missions, assuming the missions are proceeding, the argument we make is that what we do is the final record and the synthesis that makes the data usable in the future, for scientific study, for outreach, and for planning the next generation of missions. If you have hundreds of magnetic tapes with narrow strips of images of Venus, nobody looks at those. They're only a few hundred pixels wide, even though they go essentially pole to pole. But the mosaics and the global map are almost seamless, and are something that even a layperson can look at and understand. It's something where you can go to one data set and say, "I want to look at this area. Here's my piece of data." Whether that's to write a paper about it, to plan a new mission, to write a school report, whatever, the synthesized data is useful for that.

For other bodies, where we're taking photographs rather than radar, we don't even have strips, so now it's postage stamps, and the higher the resolution is, the more they show. But each one is just a tiny area. People look at those. The HiRISE images are all on the web, and you can look at them. The HiRISE local topographic products are too. But to get any sense of the body as a place you could visit, you really need the mapping synthesis of that. We talked about the CTX Mars thing earlier, where essentially it's like Google Earth. You can fly over the surface of Mars and look at it. It enhances your ability to understand what we've found there, and to find new things. A scientist can see the processes and the context and the clues that lead him or her to study features locally, and can see those across broader regions. Even a person who's not trained in science can see the world as a world, instead of as a collection of digital files that they don't even know where to go to look at.

[END]