Transforming Terrain—How AI is Shaping the Future of Geospatial Intelligence: A Conversation With Frank Whitworth
Vice Admiral Frank ‘Trey’ Whitworth, director of the National Geospatial-Intelligence Agency (NGA), explores the NGA’s current initiatives and how AI is transforming the field of geospatial intelligence.
CALABRESI: Welcome, everybody. Welcome to today’s Council on Foreign Relations meeting with Vice Admiral Frank Whitworth.
I’m Massimo Calabresi from Time magazine, Washington bureau chief, and I’ll be presiding today. Brief introduction, then I’ll engage in some questions and then come to you all for yours.
Vice Admiral Frank Whitworth is the eighth director of the National Geospatial-Intelligence Agency, a role he’s held since June 3, 2022. He’s a graduate of Duke University and Georgetown University, and a member of the Council. Whitworth has had a distinguished career with key leadership roles in intelligence for the U.S. military, including command positions at the Joint Intelligence Center for U.S. Central Command. His numerous awards include the Defense Superior Service Medal, the Bronze Star, and France’s Médaille de la Défense Nationale.
So, Vice Admiral, thank you for doing this.
WHITWORTH: Thank you.
CALABRESI: It’s a fascinating time to be having this conversation.
Since we’re on the record and speaking to a broad audience, let me start by asking you just to define a little bit the work of NGA and, you know, the main recipients of your products. It may be helpful, as we turn in a little while to AI, to get a sense of the size of your organization—staff, budget—to the extent you’re able to talk about those sorts of things.
WHITWORTH: Great. Thanks for doing this. Thanks to everyone for being here. I’m honored to be here at the Council.
National Geospatial-Intelligence Agency—when we think about the mission, if there are three words to memorize they would be targeting, warning, safety—and that latter one, safety, is safety of navigation. We’ll detail what that means, but just think about targeting, warning, safety of navigation.
I am the functional manager as delegated by the director of national intelligence for GEOINT, which is effectively the visual domain—intelligence derived from the visual domain. So think imagery that comes from satellites, imagery that could come from any sorts of sources. If there is an understanding to be drawn, if there’s a standard to be made relative to that type of intelligence, that’s where I am supposed to be in charge.
And then we run an organization as well. The organization—to your question on the scale and the people—is about 16,000 people. About 9,000 are in the northern Virginia area in a beautiful building that is just about ten miles or so from the Pentagon; it’s technically at Fort Belvoir. Thirty-five hundred people are in St. Louis in a historic building right on the Mississippi River.
And if we have time we can talk about the new building that we’re about to open up actually in St. Louis and that’s bringing in an entire ecosystem in middle America that is really wonderful.
About 500 people are in Denver—a really specialized group of people dedicated especially to a special niche for warning—and then there’s the swing arm that I am very excited to talk about, because this swing arm of about a thousand to 1,500 people really makes us a combat support agency.
We are an intelligence agency. Make no mistake, I’m part of the IC. I do report and listen to and take guidance from the director of national intelligence. But we are a combat support agency, meaning we’re part of DOD.
The paychecks for those civilians in that 16,000 and, of course, our military—it’s only about 3 percent military. Those paychecks come from DOD. So I also report very much to the secretary of defense by way of the undersecretary of defense for intelligence and security.
So that swing arm I was talking about it’s about 1,500 people or so who are actually at the combatant commands in the services, in the joint task forces, they’re forward. And so if there’s visitation that I do when I circulate out there it’s really to see them, see how they’re doing, how they’re interacting with those commanders.
So, as you know, we’ve got the combatant commands. Think CENTCOM. Think EUCOM. Think SPACECOM, et cetera—an average of somewhere between eighty and a hundred people per command, and at some of your more, I would just say, active combatant commands you might have as many as a hundred and twenty, a hundred and thirty.
But the advantage of that is they’re breathing the same air as that combatant commander. They’re listening to what he or she needs and they’re delivering, and they don’t have to come back to me to ask permission to do something for that commander. We consider them effectively assigned.
CALABRESI: Got it. That’s very helpful.
I am, if you don’t mind, going to jump right onto AI since that’s both the topic and the topic of the day, more broadly. How is it being incorporated?
You’ve been very aggressive both on procurement and acquisition but also on tasking, and describe a little bit how you’ve incorporated it, where you think it’s going, and I’ll just tack on if you can give us some use cases.
You know, there’s been reporting of varying integrity on its applications—U.S. AI intelligence applications in Ukraine and in the Middle East. So if you can address some of those use cases, or use cases elsewhere if you can’t do those.
WHITWORTH: Right. We’ll use the basis of those three words that I started with—targeting, warning, and safety of navigation—to go through applications of AI because I think it helps.
So I need to describe a little bit of each. Targeting—when you’re talking about targeting, it’s a cycle. It’s effectively a business cycle, to be quite honest. It’s a good cycle. It’s well founded and it begins with objectives and guidance but it moves into target development which also involves target discovery.
And so you have to actually discover something that is eligible and valid from either your rules of engagement or the commander’s guidance and objectives, and that brings in some principles that we hold dear as Americans and especially the American military. You know, laws of armed conflict apply such as humanity, necessity, proportionality, and as Americans we’re reared to know those things.
Distinction is different, and so the coin that I give people when I’m proud of them and just want to recognize that they’re doing a great job—that coin says NGA: Vanguard of Distinction.
Distinction is difficult. Distinction is where you’re saying to someone who’s about to make a decision, either for a plan or for an engagement against a place on the Earth: I’m helping you distinguish that this is combatant as opposed to noncombatant, this is enemy as opposed to non-enemy.
And we take that really seriously in the United States, and so we render at NGA more positive identification calls than just about any agency, certainly, in the United States. I can’t speak for the world. So—
CALABRESI: All right. What does it mean positive identification?
WHITWORTH: That means—that that gets to this issue of distinction where you’re telling—
CALABRESI: You have the confidence that you’re hitting what you think you’re hitting.
WHITWORTH: Exactly. It’s about I’m going to inform you of my level of confidence that I’m distinguishing this from especially a noncombatant and identity is very, very important here.
So imagine, if you will, the scale and at least targeting tradecraft where these are things that are probably trying to be hidden from you, and we’ve got a tremendous growth in the amount of data. When I say data think images. OK. So images are data. They’re ones and zeros. They just happen to be images, which makes them very dense. So it’s a lot of data to go through.
And as Americans we’re investing in more and more sources of data so that constellation from space is growing. So we employ something called computer vision. Everyone has heard of ChatGPT. They’ve heard of large language models which involve text and that’s very powerful.
Our specialty is in something a little bit different and that’s computer vision, which is the recognition of an object and a relationship—kind of a human and machine team relationship that goes on, which continues refining the model that goes through all that data.
CALABRESI: But this is AI in the sense of—
WHITWORTH: This is using AI and machine learning. It starts with the machine learning—
CALABRESI: Is this Project Maven, or is it—
WHITWORTH: We’re getting there.
CALABRESI: OK.
WHITWORTH: Yeah, we’re getting there. I just want to make sure everyone understands. You know, I’m not going to skip right to Maven and then forget about the basics of computer vision.
And so when an individual, a human, identifies something—and we’ve got people who’ve been doing this for thirty or forty years—that’s an advantage that you want to share with the model through a process called data labeling.
That’s a little bit different than just typing context into what we call structured observations. It’s something that translates directly to the model, and it’s kept in a very organized way so that the model ultimately can learn and improve its review of future data, future images.
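As an illustration of what that kind of labeled, model-ready record can look like, here is a minimal Python sketch—the schema, field names, and object classes are hypothetical, not NGA’s actual format:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Label:
    """One analyst-drawn bounding box on one image (illustrative schema)."""
    image_id: str
    object_class: str          # what the human says the object is
    x_min: int                 # bounding box in pixel coordinates
    y_min: int
    x_max: int
    y_max: int
    analyst_confidence: float  # the human's stated confidence, 0.0-1.0

# A veteran analyst's judgment, captured in a form a model can train on.
labels = [
    Label("img-001", "mobile-launcher", 1200, 950, 1232, 974, 0.9),
    Label("img-001", "support-vehicle", 1310, 960, 1330, 980, 0.6),
]

# JSON Lines is a common interchange format for training data: one record per line.
jsonl = "\n".join(json.dumps(asdict(label)) for label in labels)
print(jsonl)
```

Unlike free text typed into an observation, records like these are machine-readable, so each human call can feed directly into the model’s next round of training.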
CALABRESI: So the human judgment is weighted more heavily in the processing by the AI. Is that it?
WHITWORTH: So scale and speed become really the watchwords here. When you’re thinking about preparation for a campaign, or readiness, you want to ensure that you’re fast, and you want to make sure that you are complete and you’re not intimidated by the size of the area you’re looking at.
When all of us go through Global Entry or you go through TSA and you’re really impressed by that camera that looks at you and it says proceed, you’re good, I mean, that’s tremendous. We love that technology. That’s computer vision.
If only it were so easy. Number one, you’re voluntarily presenting yourself. Number two, your face is about 80 percent of the field of view of that particular sensor. We don’t have that luxury. We’re taking images from space, in most cases.
And while this is a little bit fuzzed up in the math—because I’m not going to give exact math where everybody’s going to sit there and work out the resolution that I’m talking about—we’re looking for objects that are distinct, or behaviors that are distinct, that are two one-hundred-thousandths of a percent of the field of view.
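To make that fraction concrete—using purely illustrative numbers, since the real figures are deliberately fuzzed:

```python
# Hypothetical sensor: a 50,000 x 50,000 pixel satellite frame (illustrative only).
image_pixels = 50_000 * 50_000              # 2.5 billion pixels
object_fraction = 0.00002 / 100             # "two one-hundred-thousandths of a percent"
object_pixels = image_pixels * object_fraction
print(round(object_pixels))                 # ~500 pixels, roughly a 22 x 22 patch

# Compare with the airport face-recognition case: the face fills about
# 80 percent of the camera's field of view.
face_fraction = 0.80
print(round(face_fraction / object_fraction))   # the face target is ~4,000,000x larger
```

Even with made-up sensor sizes, the ratio shows why the airport analogy breaks down so badly at satellite scale.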
OK. So now you get to understand why you need an extra advantage. As the amount of data goes up, as the number of sensors goes up, the data needs to be culled through and you’re looking for very, very minute things in areas that, frankly, they’re trying to hide from you.
So for targeting, that is the essence of target discovery at scale and at speed. Do I think that it’s going to become ultimately autonomous? Don’t know. I think that that’s certainly not an American way of war for offensive operations. I think it probably is on the defensive side. I defer to my chain of command on that.
But we are certainly wanting to always improve upon that positive identification. This is where Maven has made an impact. Maven has allowed that process of discovery to be faster, to be complete, to never ignore the role of the human, to inform the human, especially the target engagement authority of levels of confidence, and to do it in a way where it’s coded with a graphical user interface that is agile.
CALABRESI: And just to be clear, Maven is the AI product that you bring to bear on the computer vision?
WHITWORTH: It’s a graphical user interface and ultimately a decision process. OK. So I would like to say that it’s a process that fits into the targeting process specifically, and where the AI and computer vision makes its money is actually in that target discovery piece and putting it all in a place where it is discoverable relatively easily to a user.
CALABRESI: So before I jump into all of the complications that come with that, whether that’s state-to-state or public-private—just on the use cases, and fully disclosing my journalist hat here—are Maven and that process of computer vision being brought to bear in use cases in Ukraine and in Israel’s fight in the Middle East?
WHITWORTH: Yeah. So I don’t think in this venue I would detail exactly where it’s being used for real-world operations except just to say we’ve got really happy four-star commanders out there in places like U.S. EUCOM, U.S. CENTCOM, U.S. SPACECOM, and so that’s about as detailed as I’d really like to go into in real-world operations. But this isn’t an experiment.
CALABRESI: Right.
WHITWORTH: OK, and so it’s being tested and constantly improved.
The code itself allows the entire cycle to be involved in this graphical user interface which is really powerful for anyone who has been in the military or is currently in the military. That brings two worlds, really, three or four worlds together. It brings in intelligence worlds. It brings in operational worlds. It brings in logistics worlds.
So as you’re trying to assess not only about a target but also where you are in your availability of a munition, and where you are in what is available for this munition in terms of a strike asset—I think that that’s another aspect of what we call NGA Maven and, by extension, this graphical user interface called Maven Smart System. What I am seeing in terms of its popularity at those COCOMs is how agile the code is for anyone, not just people on the intel side.
CALABRESI: And can I ask—so just getting then, having, you know, moved past the nation to nation question—maybe come back to that a little later in questions out here—the public-private, how much are you describing a system that has been purpose built inside NGA or DOD, more broadly, and how much is this being outsourced to contractors?
We did a cover story—I’d commend everybody—on Palantir in Ukraine. They’re, obviously, eagerly involved. How much have you—describe that relationship.
WHITWORTH: Yeah. So you’ve described one prime, certainly, but if they were here I think they’d be very quick to say—and this isn’t my bailiwick, but they’d be very quick to say there are another twelve subs involved.
So this has been kind of the evolution of, hey, if there’s best in breed in a particular aspect of the targeting cycle they’re going to be involved in the graphical user interface in that system. So on—
CALABRESI: So their product is integrated into Maven. Is that it?
WHITWORTH: Theirs being—
CALABRESI: Palantir’s, or any of these.
WHITWORTH: Oh, sure. Oh, sure.
Yeah. As I say, you’ve described one big prime.
CALABRESI: Yeah. Yeah. Yeah.
WHITWORTH: But there are other subs.
CALABRESI: OK. So let me just jump in on that.
Values are central to your mission, as you describe it, and to America’s strategic interests worldwide. It’s a whole other thing, it seems to me, if, you know, you’ve got primes running the AI that’s providing some level of targeting data there.
Do they have to adhere to the same national security memoranda and other requirements that the administration has started to put in place to ensure that you’re adhering to America’s values?
WHITWORTH: Well, from an accountability perspective, you know, don’t forget there’s someone who’s actually making the hire, making the decision as to, you know, whether at the time this is an acceptable product or not—and this is where we fit as well.
So for anyone who’s wondering about standards, especially some of those principles that I mentioned before, absolutely, and we’ll get into some things like certification and accreditation that I think should be pleasant news for anyone who’s concerned about this particular technology.
So on this code and the availability—you mentioned how we happened to become a part of this at NGA, and I should tell that story very quickly.
So Maven began as a project. It was called Project Maven, and the idea was really to excite the defense sector, to see in a very unclassified way where there might be applications for AI commercially in defense, and it was very successful. Everyone could see that this is something we need to gnaw on and refine.
And so the decision was taken just a year and a half—almost two years ago, not quite—to find a sponsor, where this thing needed to really become a program as opposed to a project. It had been a project. It needed to be a program of record. And the undersecretary of defense for intelligence and security made the bold decision that that should be NGA, and to this day I think that was the right choice.
I’m a big believer in going for that which is hard first. We have the most data. This computer vision stuff—when you’re talking about this much data, you’re kind of going for the hardest thing when you say let’s invest in the time and energy of NGA, because they’ve got a lot of data. And so let’s go ahead and do that.
So that’s how it became at least NGA Maven.
CALABRESI: Got it.
WHITWORTH: And then we took less than a year to go through all the machinations to ensure that we met the standards of being a program of record, and that’s what it is now. And that, from a fiduciary perspective, I think, is key: we’re now really concentrating on ensuring we’re not paying twice, that we’re really responsible stewards of every dollar that goes into its development, as you would for any other program of record, but also looking at how this truly can become an instrument of national power.
CALABRESI: So let me ask about that, too, because, obviously, a big subject of conversation right now is efficiency in government. There have been, you know, multiple attempts to start to wrestle with what everybody I think accepts as an outdated defense acquisition approach.
Other transaction authority was going to be the thing. You know, now you’ve used something called a commercial solutions opening.
WHITWORTH: Those, yes.
CALABRESI: You’re the first at NGA to—so, are these just little cracks in the acquisition dam, and it’s, you know, all going to have to be torn up and remade? Or, especially given your reliance on the commercial sector—whether that’s the actual sensors in space or data miners—is this just fiddling on the margin, as somebody who’s lived that acquisition transition?
WHITWORTH: Yeah. So my view right now just as a person who cares, not to mention being a leader, is that this is not on the margins. This is not for show. That the speed of acquisition everyone really does want to improve and CSOs are a great step.
And I’m very pleased to tell you that we do have a CSO. It’s called Project Aegir, and it’s dedicated to having pre-vetted vendors that are ready to go—contracts effectively already ready to go for the time that you need commercial analytics.
We don’t buy pixels. OK. NRO buys pixels for us. But we do buy analytics, and that gap between a pixel and an analysis is getting really narrow, and so we’re spending a lot of money on analysis right now.
And so you may have heard about Luno A and B. We’ve moved out with the identification for Luno A, with ten, you know, really good, vetted companies that are going to help us with that commercial application, which is largely dedicated to maritime domain awareness.
Luno B will be awarded in calendar year 2025. But this Project Aegir, you know, is getting at what the Defense Innovation Unit is identifying as the CSO process—pre-vetted, quick, fast. And so I don’t think it’s an experiment. I think that, you know, it’s a demonstration of real intent.
CALABRESI: Let me ask, getting back again to the theme of values, but also to impediments to speed and the retention of values.
This administration, especially National Security Council leadership, has been fairly focused on trying to put in place some frameworks and rules, even if interim, for handling AI broadly throughout the government, but specifically on national security matters, and also in the intelligence world most recently with—let me see what this thing is that Charlie Savage had in the Times. I don’t think I have it with me, but it set out some new constraints on what could be acquired in the way of intelligence by the government.
Do you—asking you to comment generally on whether that’s been helpful or harmful. Is it an impediment? Is it useful in setting values? Is there too much focus on policy and doing it right versus doing it fast?
What’s your reaction to this? I mean, just a lot of brain power and time. Just to throw out an example slightly outside your domain: there was a good question at another Council session on the challenge of protecting—or abiding by—the protections for U.S. persons that the U.S. intelligence community has had to abide by since the introduction of the Foreign Intelligence Surveillance Act. When you’re sucking up huge amounts of data every day, how do you do that? There must be an equivalency in your world.
So is this helpful? Is it central to the value proposition of American strategic leadership or is it getting in the way of the speed that you need to innovate?
WHITWORTH: A lot of questions there. I’m going to go for the easy one first, which to me is intel oversight, and so as someone who has grown up through this community—I’m now in my thirty-sixth year—I am preprogrammed to submit to intel oversight principles.
We don’t spy on Americans, period, right? And so I’m very, very wedded to that. If there’s going to be a debate on that that’s probably going to be done at the policy level. I’m just going to be a practitioner and I’m going to ensure that we train our people accordingly.
So if there’s any sort of debate on that, we’re going to fall in. But right now our people, to the person, all 16,000, know exactly what their left-right limits are in this regard. And what’s very interesting—we can talk about this later—is that there are times we are allowed to get commercial imagery of the United States, but it’s when we’ve got Americans in trouble. Like, the Department of Homeland Security needs some help with a hurricane, and we’re part of the search and rescue effort and that kind of thing.
CALABRESI: That’s interesting.
WHITWORTH: But otherwise we’re committed to our intelligence oversight training and those principles.
CALABRESI: Can I just mark that as we go through? There have been instances under your tenure where Homeland Security has come to you and said, you guys have the ability to take this massive data we just sucked up and find the missing person, you know—
WHITWORTH: It’s more acute than that. OK. It’s more acute than that. It’s not like we go into the coffers of all of our collect. Actually, it’s very, very specific. We have a specific need. Ian is an example. Helene is an example. Milton—
CALABRESI: Can I just ask who had to sign off on that for domestic?
WHITWORTH: It has to be an official request coming in from one of our agencies. In this case it was FEMA by way of the Department of Homeland Security.
CALABRESI: I see. And does ODNI—does the director of national intelligence have to sign off or is that just—
WHITWORTH: Actually, once—no. Once it comes in through an accepted process called an MA that’s all we need at that point and we have people who are—
CALABRESI: Doesn’t sound like a huge amount of protection from spying on American citizens, if you don’t mind my saying.
WHITWORTH: Well, I think that, again, all of our people know what they’re supposed to do. Commercial imagery is made available and—
CALABRESI: Oh, that’s right. That falls under the whole bundle.
WHITWORTH: Yeah. And so you also have other sources out there. Civil Air Patrol is flying around. But they need somebody who, from a cartographic perspective, can actually put it all together for those searchers, and that’s where we come in—looking, you know, at really detailed places on the Earth, which can be important if you’re looking for someone who’s missing.
But sometimes it’s just to help the searchers get organized, building grid reference systems so that they all have the same reference system when they’re saying, I’m going to this particular place, and actually generating waterproof maps and charts, right?
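A shared grid reference of the kind he describes can be sketched in a few lines of Python—the coordinates and cell-labeling scheme here are invented for illustration, not NGA’s actual products:

```python
def make_search_grid(lat_min, lat_max, lon_min, lon_max, rows, cols):
    """Divide a search area into labeled cells (A1, A2, ...) so that every
    search team means the same place by the same label. Returns each cell's
    southwest-corner coordinates."""
    cells = {}
    dlat = (lat_max - lat_min) / rows
    dlon = (lon_max - lon_min) / cols
    for r in range(rows):
        for c in range(cols):
            label = f"{chr(ord('A') + r)}{c + 1}"
            cells[label] = (lat_min + r * dlat, lon_min + c * dlon)
    return cells

# A notional 2x2 grid over a small coastal area.
grid = make_search_grid(27.0, 27.2, -82.6, -82.4, rows=2, cols=2)
print(sorted(grid))   # ['A1', 'A2', 'B1', 'B2']
print(grid["A1"])     # (27.0, -82.6) -- "A1" now refers to one agreed-upon place
```

The value isn’t in the arithmetic; it’s that every searcher reporting “A1” is unambiguously talking about the same patch of ground.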
And so there are two trailers that we have prepositioned for this very mission, with people who are trained to go. They have independent communications, they have diesel generators, they have printers, and they have their training, and they generate maps and charts for a search.
Once the search is over—you know, once everyone out there has been saved and it becomes more of a kind of recovery or an assessment, maybe for insurance purposes—we’re out. That’s it.
CALABRESI: Got it. I could just say I felt, like, a slight chill running down my spine imagining what the conspiratorial corners of the internet would do with what you just described. But it, obviously, makes a lot of sense.
WHITWORTH: I haven’t heard that. I haven’t heard that. I’ll be honest with you, and I’ve been talking publicly about this for some time because we’re really proud of it and I’ll tell you this. This involves an aspect of recruitment that I think is also very exciting.
When I talk to twenty-two-year-olds, they might be interested in defense. They might be interested in breaking things—bad things, you know, in sort of the targeting world—but they’re almost universally interested in saving things.
And so the idea of saving Americans is very attractive to them. The same is true in understanding the Earth. You know, in that safety of navigation mission that we talked about, we have a responsibility for setting the datum that everyone, you know, uses as the basis of accuracy and precision—including in your cell phones, including in our maps in the military. It’s called WGS 84—World Geodetic System 1984. We set that and we maintain that.
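WGS 84 is, at bottom, a reference ellipsoid plus an Earth-centered coordinate frame. Its two defining constants are public, and the textbook conversion from latitude/longitude to Earth-centered coordinates is only a few lines—this is a sketch of the standard formula, not NGA code:

```python
import math

# WGS 84 defining parameters (from the published standard).
A = 6378137.0              # semi-major axis, meters
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared, derived

def geodetic_to_ecef(lat_deg, lon_deg, height_m=0.0):
    """Convert latitude/longitude/height to Earth-Centered, Earth-Fixed meters."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + height_m) * math.cos(lat) * math.cos(lon)
    y = (n + height_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + height_m) * math.sin(lat)
    return x, y, z

# Sanity check: a point on the equator at the prime meridian lies exactly
# one semi-major axis from the Earth's center.
print(geodetic_to_ecef(0.0, 0.0))   # (6378137.0, 0.0, 0.0)
```

Every GPS receiver and cell phone that reports a position is implicitly doing arithmetic against these same two constants.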
CALABRESI: Is a good reminder that what we see in public facing social media and so forth is not necessarily representative of the general patriotism or a set of values of Americans more broadly.
A good opportunity to switch, now that we’re at 2:32, to questions from you all. I’d like to take questions and remind everybody that this meeting is on the record. And, please, a reminder—when you’re called on, stand, say your name, identify yourself, and please make it a question, not a reminiscence. (Laughter.) I’ll interrupt. You’ll be embarrassed. I’ll be embarrassed.
So with that, any questions in the room?
Mr. Hormats?
Q: Well, thank you for—thank you. Bob—
CALABRESI: For those who don’t know you—yeah. Thank you. Yeah.
Q: Bob Hormats. Thank you for a very thoughtful presentation.
I’d like to ask you to address the issue that is becoming more and more, I guess, timely or discussed and that is the issue of the Russians and their nuclear capabilities. You’ve talked about intelligence. You’ve talked about operations. There was a third one that I’ve forgotten but in that spectrum. What does your utilization of AI tell you or how do you enable it to tell you the probability or the capability of a country, Russia in this case, to utilize nuclear weapons?
In other words, they have to be thinking through all the pros and cons and ramifications. Do you have a process internally that enables you to duplicate or at least try to duplicate the kind of things that Russia is thinking about?
And I say this because deterrence in large measure is a psychological phenomenon, and they have to conclude that this would be a bad thing for them, because such and such could happen on the other side that would make it a bad thing for them.
So it has to be credible and it has to be capable. How do you use AI to give you that kind of information so that you can figure out what they’re thinking and determine how far you want to go to ensure deterrence?
CALABRESI: Can I just—and I’ll just add on to that.
So, one, have you seen any change in nuclear posture, especially in recent days, as there’s been back and forth with Putin signing the new guidance on nuclears to that point, and with regard to his deterrence question is some of what you collect on what Russia has and is doing then available for being made public to let them know that we know what they’re doing?
WHITWORTH: Understand. So you’re addressing the warning problem. We’ve been going a mile a minute. So we’ve talked about targeting. We’ve talked about safety of navigation. We didn’t do much in the way of warning. So let me—I’m going to back up a little bit and then I’ll move into the specific of the question.
So warning is, effectively, the big industrial piece of what we do at NGA. It involves looking back and looking forward. It involves an assessment based on a baseline: you establish a baseline of behaviors and equipment, then you look for change, and then you try to predict the change as well.
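The baseline-then-change pattern he describes can be sketched in a few lines—a pure toy, since operational systems register imagery, model seasonal and lighting variation, and reason about objects rather than raw values:

```python
def change_cells(baseline, current, threshold=0.2):
    """Return the (row, col) positions where `current` departs from the
    established `baseline` by more than `threshold` -- the 'look for change' step."""
    return [
        (r, c)
        for r, (brow, crow) in enumerate(zip(baseline, current))
        for c, (b, cur) in enumerate(zip(brow, crow))
        if abs(cur - b) > threshold
    ]

# Baseline: a quiet 4x4 patch of notional values. Current: one cell jumps.
baseline = [[0.0] * 4 for _ in range(4)]
current = [row[:] for row in baseline]
current[1][2] = 1.0                    # new activity at one location
print(change_cells(baseline, current)) # [(1, 2)]
```

The prediction half of warning—anticipating the change before it appears—is where the harder modeling lives; this only shows the detection half.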
So warning is—as I say, that’s where we really, really invest most of our time and energy and I hope that is of some comfort to everyone. And there is no place where we invest more time in warning than in assessing any chance or indication of some sort of vertical escalation that could involve nuclear weapons. OK.
Every single day at 8:45 in the morning I sit down with our team and we have a good old-fashioned discussion. It is effectively a gentle Socratic discussion where I’m asking them questions and just seeing what they think—and it’s a nice way to have a conversation with our analysts, especially people who’ve been awake all night. We talk about the products that we’ve generated by typically 4:00 in the morning, which then go up, you know, to some of the highest levels of government, and we have a conversation.
We start talking about the next product. That meeting always begins with the same graphic on what we know and don’t know about the possibility of vertical escalation.
I have mentioned this in multiple press engagements as a way to, I hope, bring solace to any American listening who might be concerned. This is what we cut our teeth on. When you go back into our history, especially the days of NPIC—the National Photographic Interpretation Center—some very, very important intelligence victories, especially if you go back to the Cuban missile crisis where people really became interested in imagery and what imagery can tell you and can tell the president especially.
And so, you know, in the deterrence conundrum certainly you’ve got to have capability, will, and communications and I would put this kind of in the communications category, whether there are discussions or not about what we may see. I’ll leave that to policy makers and to people who actually execute deterrence policy.
But I know this. We are overcommunicating all the time what we know and don’t know relative to the chances of vertical escalation, and this has been, I think, one of the mainstays of our work relative to Russia and Ukraine: we’ve never lost track of that mission throughout the entirety, from the preinvasion period to where we are today.
With the Ukrainians the power of imagery is also very real. While I won’t necessarily discuss much detail I know this. My mandate and our mandate at NGA—these 16,000 people were given the mandate, like, ensure that the Ukrainians can defend themselves.
Now, one of the great things about some of the commercial discoveries and, I would say, constellations that have been put up in space is that it’s easy to share. It’s not necessarily national, exquisite stuff where you’re trying to keep that edge. It’s actually something you can share, and, you know, 400,000 people—including vetted Ukrainians—have a way to get their information and their imagery through a portal called GEGD, Global Enhanced GEOINT Delivery. That’s through commercial imagery, and that itself is a form of transparency.
Now, this has been twisted. In the past I had somebody, I think, report in RT—of course, it was a loaded, you know, kind of twist where they said, well, that means then that you were informing the Ukrainians before Kursk.
I said, no, let’s be very clear. Kursk was a surprise. David Cohen said that himself, the deputy at CIA. But make no mistake, you know, in terms of transparency commercial has been very helpful to the Ukrainians in terms of what we know as a country about vertical escalation. We never stop, ever stop, making that a priority for warning.
CALABRESI: Fantastic. Thank you.
Chris?
Q: Thank you very much. Chris Isham with CT Group.
I wonder if you could talk a little bit. I know you’re—you don’t really want to talk about real world situation but maybe a hypothetical might be in order. Let’s assume a nonstate actor is targeting and firing missiles at commercial shipping in a navigable seaway and at U.S. naval vessels as well.
WHITWORTH: I wonder what you’re alluding to. (Laughter.)
Q: What kind of intelligence challenges, from your point of view, does that present? And they’re using mobile launchers that—they’re disguised as civilian vehicles.
WHITWORTH: Yeah. I think you’re adding that dimension of mobility. I’d be irresponsible to just gloss over that. No, mobiles are hard. I was in charge of relocatable targeting at U.S. CENTCOM for Iraqi Freedom, one of the hardest things I’ve ever done.
So anytime you’ve got something that involves wheels and mobility it’s a much harder, you know, situation from a collection, exploitation, and timeliness perspective.
So, yeah, I think that’s—I hope that addresses your question. You know, I think it’s probably obvious but at the same time I think that’s why, yes. It’s like, are mobiles that hard? Mobiles are still that hard.
CALABRESI: But I think it is public knowledge that Maven is being deployed in the Red Sea and in Yemen, yeah?
WHITWORTH: So, like I say, I keep it at the U.S. CENTCOM level. I don’t know if CENTCOM talks about what they employ but, you know, I’m very, very pleased with our relationship with U.S. CENTCOM with NGA Maven. Yeah.
CALABRESI: In the front.
Q: Hi. Tim Kubayrch from Harding Loevner.
I have a question about accountability in decision-making processes.
WHITWORTH: About, I’m sorry, the key word?
Q: Accountability.
WHITWORTH: Accountability, yeah.
Q: Do you hold decision-making processes that involve artificial intelligence to different standards than human actors? And how do you combat algorithm aversion, by which entities within your organization might be averse to having artificial intelligence take more market share of the cumulative set of decision-making processes at your organization?
WHITWORTH: Yeah. So the key word here—I love this question. It’s exactly the right question.
The accountability is still with a human. Accountability is still with a commander, and I've said this in a couple of different venues. NGA Maven—the Maven Smart System—is a decision-making aid. It is an enabler to decision-making. It is not a decision-maker. Really, really important.
We are not at the point in our history of decision-making and the way of combat and combat preparation where we’re looking at unwitting enemies being hit by things that, you know, are decisions by machines. OK. We’re not—that is not—that’s certainly not what I’ve been instructed to do.
But we do owe it to humans who are making these decisions, either in preparation or in execution, to get them the best and most complete and the fastest solutions we can. So where I do see a responsibility is on the human side in certification and then on the model side in accreditation, and this kind of gets into—you wanted to go here so I’m going to go ahead and go here.
CALABRESI: Right. Yeah.
WHITWORTH: When we’re talking about what I have a responsibility and we at NGA have a responsibility to do it’s to set standards and enforce them. So early on it was clear, based on the executive order the president put out and also just on our own intuition, that we needed to certify our people on some of the principles that we hold dear if they’re going to touch that code.
If they're going to touch that code they're going to know about humanity, necessity, proportionality, and distinction. And they're literally going to take a pledge at the end of their training that they're going to adhere to those things. And so this is—for us it's very simple: GEOINT Responsible AI Training—GREAT for short.
Everyone who touches the code goes through GREAT training and it’s actually really good, and we’ve now been establishing a way to provide that training to a lot of people in military commands.
More recently it became clear to me that models are becoming more plentiful and if it’s the visual domain and especially visual intelligence in a process, whether it be warning or whether it be, you know, targeting, I’m going to and want to ensure that we have a chance to evaluate so that if somebody wants to use this model they know the rigor it’s been through.
And so we have established a pilot that is called AGAIM—the Accreditation of GEOINT AI Models—and it's effectively founded on the sort of risk continuum that we set for collateral damage estimation. It's to tell people what level of risk an evaluation has either mitigated or hasn't gotten to yet.
But instead of—like, in collateral damage estimation typically you have one layer, and if you didn't pass that you've got to move up to the next one. In AGAIM, in the accreditation model, you pass one, you move to the next. You pass that one, you move to the next, and it's about looking at your documentation, the certification of the people, your training regimen. When it's really basic then it might move up to level two, which will be a little more about the training data, the data that you use in the process of building the model.
Level three will even have more scrutiny, looking at your outputs relative to the inputs, and then level four, you know, we probably won’t get to for a while but it’s at least a model where if somebody is going to say, I need this model for our purposes, whether it’s warning, targeting, or safety of navigation they have some idea about its quality.
OK. You get the picture. I’m—
CALABRESI: (Laughs.) Right here in front.
Q: Thank you. I’m Jonathan Panter. I’m the nuclear security fellow. I’m also a former U.S. Navy navigator so I can attest to how amazing the NGA charts are—how important they are.
WHITWORTH: Awesome. Thank you.
Q: So I’m wondering—you must be sitting on a gigantic pile of data. That data must be increasing exponentially.
WHITWORTH: Yes.
Q: And within five to ten years and certainly after that you will have the ability to perform some kind of time series analysis on the data, providing you an additional layer in the form of time in mapping discrete events.
And are there—do you have any plans to which you can speak about applying that time series analysis to make generative predictions using AI to provide not just real-time intelligence to the decision-maker but a model of what might happen in intelligence?
WHITWORTH: Yeah. Predictive. Yeah.
So I'll answer that you're exactly right in everything you just said about the applicability of our coffers—our intelligence coffers. That data should never go away.
Why? Because most of our eureka moments happen actually after we’ve got—like, well after the original collect and we’ve had some new hint—some asymmetric hint from maybe another sort of intelligence source that triggers us then to go back into our coffers and put the breadcrumbs together and eureka, we have a new discovery. And so you will not find many, if any, members of our team wanting to get rid of that data ever. Data loss is a bad thing for us.
Now, to what you’re talking about—and I addressed this recently in one of the conferences I spoke at—multimodal. So we’re very, very much into the visual domain right now but to really get powerful in terms of whether you’re correct and you’re moving towards an accuracy or at least some level of dependability, confidence, you’re going to want to add some other modes in there like text and maybe even some signatures.
And so, yes, we’re going to really concentrate on some large multimodal models—LMMs—for our future that I think will equip still a human decision-maker to make decisions based on sound predictions.
This is still untapped. It’s still new, but the rate things are moving right now I like our chances of moving into, you know, where it helps prediction.
CALABRESI: MASINT is currently under who?
WHITWORTH: The functional manager for MASINT is the DIA director Jeff Kruse—Lieutenant General Kruse.
Q: Thank you very much. A great, interesting talk.
January 10—this is going to be ten days before January 20 and there’s going to be a big change in government during that period of time.
Could you talk a little bit about—you have a dedicated, extremely highly expert group of people, some of whom are going to leave and then you’re going to bring in new people who will have to be trained. That sounds like a big ask of you.
Could you tell us about that transition, please?
WHITWORTH: And so we’re talking about in this particular election year as opposed to any other year or January 10 is—
Q: We were talking January 2025.
WHITWORTH: Right. So this is not a prop. I carry this with me at all times because I love to give the oath to people. This is the oath that we take, every single person at NGA, and so the oath is to the Constitution. It's going to be the same oath. If it's somebody new coming on board we're going to give them the same oath. If it's the people we've had for twenty, thirty, forty years, they still abide by the oath. It's going to be the same oath on the 19th as it is on the 21st, so, yeah.
Expertise—so it’s all about training. I’m really proud of a couple things. When it comes to our tradecraft we have a very good college. It’s called the NGA College, and so we have eleven certifications, and that’s high when you compare it to other disciplines in the intelligence community.
And so the premium for us is that this is a very specialized tradecraft, and so we'll be prepared. If we have a lot of people to train we're going to be prepared for that. But I don't find it daunting, to be quite honest, and right now, you know, our people love what they do so much that I'm not hearing any whispers about people leaving en masse. I'm just not. They're very excited about what we're doing right now and getting better at it.
CALABRESI: You know, somewhat relatedly and treading into woke waters, any non-White-male questions? Oh, there we go in the back.
Thank you.
Q: At least 50 percent of that. Hi. Chloe Demrovsky.
I’m on the FEMA National Advisory Council and as part of that work we went over to NASA to look at the work that they’re doing in geomapping, and I’m curious how you collaborate with them to help see if there are maybe any gaps or other ways that you can kind of fill in the picture that way.
Thank you.
WHITWORTH: We love NASA, and if you were to allow us about an hour we would go through a pretty rich history with the space program of the United States to include mapping of the moon for the original lunar series.
As it stands now, I think that that’s still—you know, right now, the interest is in NASA taking that mission for future lunar events. But, you know, we have responsibilities in space and this is one of the reasons we actually changed our motto.
Our motto used to really concentrate on the rock itself, on the Earth. It seemed like all we did was look down and address the Earth down, and we wanted to address two extremes, one being the seabed and the other being space.
And so we finished our motto. It's now "know the world, show the way," which speaks to that navigation piece from seabed to space. So we do look up. We don't talk about it very much. OK. This is where it's difficult sometimes to talk to Americans about how. OK. We just leave that one off. We don't really discuss the how. But we do look up, and we do make assessments of responsible behaviors in space as opposed to irresponsible behaviors in space.
So whether it’s the Space Force, whether it’s SPACECOM, or whether it’s NASA they all care with regard to what we’re distinguishing or the level of distinction that we make when we look up.
CALABRESI: Let me just tack on something to that.
You guys put out a report, I think several years ago, that assessed competition with our adversaries overseas on a number of different factors, and people were surprised to find that, in fact, China, despite not necessarily being at pace with us on AI, is nevertheless, in your mission area, outperforming us in a number of areas. CSIS had a report.
Can you speak to how the U.S. is doing in this area writ large relative to Russia and China and other competitors?
WHITWORTH: Yeah. So it’s a net assessment question. I love net assessment. I think net assessment is incredibly important. Typically, though, it’s in—you know, in closed channels where we have these discussions. But I think a fair way to answer a net assessment question is like this.
We all have to be impressed by the growth of their coverage of surveillance and intelligence tradecraft.
CALABRESI: You’re talking China in particular?
WHITWORTH: Correct, because that was your question.
CALABRESI: Yeah, yeah. That’s right.
WHITWORTH: Right. However, what I think remains to be seen—you know, this country has, I'm not going to say perfected, but constantly refined the process of rendering bad news quickly up our chain. OK. That's what twenty years of a war on terror will do. It will really refine the pace at which you tell—we tell—our national command authority that, hey, there is something very serious and you need to know about it right now.
I think the jury's still out there for China, and so I would offer that that's a really important gap in any sort of net assessment: I know for a fact that everyone in uniform, everyone in the defense industries, everyone who does this national security work—we're dedicated to moving bad news quickly up.
CALABRESI: Interesting. OK. Right here in front.
Q: Hi. I’m Paul Creedon, Incentrum Group.
I have a question about some of the issues people have raised about people, training, vast amounts of data. It seems like software can be a solution for that. I'm curious if you can give us, in broad strokes, though, from a capital allocation view of your budget, how much is going to people and training versus software and services that bring software.
WHITWORTH: Right.
Q: Because the history of the Department of Defense is to acquire more hardware, where they package the software in a platform, and you have an agency that is doing the PED—you know, processing, exploitation, and dissemination—that is totally analytics driven, not product driven. So you're on the forefront of maybe acquiring pure software. I'm curious how you're thinking about this and what you'd want that software-to-people ratio to get to in the future.
WHITWORTH: Yeah. So I wouldn’t be able to quantify it. I’ll just say I think the best way to answer your question is from an AI lens and a conundrum that we have relative to—you know, AI is great. One day it may actually do the work of a lot of people but it sure does take a lot of people to get there, and that’s not a joke. It’s for real and it gets to this issue of data labeling. It gets to teaching the machine. In the machine learning process you have to have people do that.
I had a question early on a year and a half ago—should we consider incentivizing the early retirement of people who aren’t necessarily IT savvy or comfortable with AI, and that was about the fastest meeting I’ve ever had.
I said, people who've been doing this for twenty, thirty, forty years are exactly the people who are going to be part of modernization, teaching these algorithms and these models what they're looking at and getting into the nuance and the things that are so incredibly difficult about this business.
So the issue we have here, and this is not to lobby for additional resources. I don’t do that and I’m actually—I would tell you from an efficiency perspective this agency is doing extremely well with what they are provided.
But our people go to work—our average analyst will go to work with a stack of requirements, and they have to get through that during the course of the day, and they have to characterize what they see—those things we call structured observations—and that's its own process and they've been trained to do that.
And because they typically look at the same places on the Earth they know immediately if they’ve got something anomalous and in some cases they’re not even typing. They’re picking up the phone, and that’s very powerful.
It’s very difficult for me to come in and say, hey, by the way, I need you to spend 15 percent of your time also doing some data labeling for us. That’s not fair. At that point, they’re going to come back and say, I have these requirements. These are warning-related things. I need to do this. What falls off?
So what do we do? We pay. We pay for data labeling, and we made some news on this here recently. You may have seen this. A seven-hundred-million-dollar request for proposal known as Sequoia—the largest data-labeling RFP that DOD has ever put out.
Now, part of my vision, and getting to your question, I’m not going to give you an exact ratio. It’s not that mathematical. But my vision is one day we won’t have to pay necessarily for data labeling. It will be part of the natural tradecraft.
But I don’t—in this particular moment, as busy as our people are, I would be irresponsible to tell you when that will come because I have to respect how committed they are to that queue and their structured observations, while at the same time we’re developing these models and making them better.
CALABRESI: Hundred percent endorse the idea that people at the senior end of their career are very, very valuable in the face of AI. (Laughter.)
In the back.
WHITWORTH: (Laughs.) I’m right there.
Q: Ken Kraetzer, CaMMVetsMedia.
Admiral, thank you for your service to our country, first off. And we cover West Point considerably. A lot of the cadets we speak with say they’re studying geospatial. They have programs. They send twenty-five cadets into the cyber branch of the U.S. Army.
I was just curious the relationship of someone in the military working on cyber and somebody who might be working on geospatial. Are they parallel or do they come together?
WHITWORTH: For data science a lot of similarities. So data for us, when we’re talking about multimodal especially we certainly are experts in the visual data and making sense of that, building understanding, but we’re always looking for additional sources of data and applying data science to find trends.
OK. And so I am ready to talk to anybody out there, whether they’re military or not who actually wants to help our national security challenge through data science if they’ve been properly trained in data science.
There are some things, though, whether people come up through some sort of military training or purely academic training, that I would like to sensitize you to—some areas where we really need some help in the recruitment category—and that's in geodesy and the study of the Earth.
So we talked about WGS 84 and the essence of precision. Well, that also entails knowing everything about the core and in some cases the crust, and the change in the spin rate of the Earth, the changes in gravity everywhere on the Earth, the changes in the poles and, yes, the changes in the water.
We have to understand mean sea level and we literally determine mean sea level because the essence of accuracy for a chart and for an aim point and for anything where you need precision is the Z axis. X and Y, lat/long, everybody can do that. But it’s going to be wrong if you don’t have the elevation correct.
Well, you have to use some basis to start your elevation—mean sea level. So we are very into hydrography, bathymetry, and understanding the changes and encroachment in some cases of water—changes in the poles, changes in ice.
So I would implore you if you know young people that want to help a great team and help this nation and they’re interested in geodesy we’ve got a place for them, and so this is a very important part of kind of the ecosystem between government and what we do and academia and business, for that matter.
So I hope—I went on a little bit of a tangent to your question, but I do want to emphasize that this is an area where we have to improve our recruitment in some of the earth sciences, especially geodesy.
CALABRESI: Fascinating. We are at time and have a hard stop for you and by Council rules. So thank you all for attending. Thank you so much, Vice Admiral. It’s been fascinating.
WHITWORTH: I’m honored. Thank you. (Applause.)
(END)
This is an uncorrected transcript.