AI Target Selection with Human Oversight

Poor Man’s Palantir: The Beginning and Phase 1

(To be clear, neither I nor my colleagues have had, nor have, any association with Palantir. We have never used their product, nor do we know of anyone who has. We find what they purport to do interesting and important, but there is no ideological inspiration or any connection to our work at all. The title phrase was one attached to our evolving work by an observer.)

The Beginning

It began in late 2022, when I started tinkering with artificial intelligence tools and exploring the potential of large language models (LLMs). I realized almost immediately that this technology represented a profound revolution, grounded in two central ideas:

  1. Mastery of LLMs requires mastery of language.
    One must understand grammar, nuance, and communication – and possess the awareness to recognize where bias may arise.
  2. The definition of history and reality will increasingly depend on how AI chooses to present and define truth.

The second idea remains deeply unsettling – one that will likely stay with me for the rest of my life. The first, however, was genuinely inspiring.

I have always been drawn to the study of history, war, law, business, and games. My curiosity has been sustained by a hunger for information and understanding – both of which exist only through language and communication.

For decades, information systems and databases have accelerated access to knowledge, though always within rigid, predefined structures. AI, by contrast, appeared to open an entirely new frontier. A traditional database is edited, constrained, and ultimately mechanical: its content, search logic, and results all shaped by human design. An LLM, however, can use language as humans do – removing much of that imposed structure. More importantly, it learns from each individual’s interaction, adapting to the way that person employs language while still recognizing shared intent.

In early 2023, having only recently begun exploring AI tools, I decided to experiment more deliberately. My initial question was simple: could AI identify a set of critical infrastructure targets that would support a successful military campaign?

Although the concept seemed straightforward, the challenge was intellectually complex. Many human professionals fail repeatedly at such analysis, yet I hoped that by guiding AI through the process, I might observe it becoming progressively more capable – learning to evaluate the completeness, accuracy, and value of the information it encountered.

At first, there was no formal structure. The project began informally, as most of my explorations do. Only later did I reconstruct the path I had followed. If I were to begin again today, my approach would differ somewhat – partly because of what I learned, but even more because AI itself has evolved with astonishing speed.


Phase 1: Mapping Critical Infrastructure

Step 1 – Identification

The first task was to have the AI identify elements of national critical infrastructure by function, location, and criticality. Its initial interpretation of these terms required refinement. That process resembled the iterative narrowing of a complex Google search – until the first true moment of discovery.

Once given the key terms anchored by contextual references such as PDD-63, the AI became focused almost instantly and remained so for the remainder of the project, which extended over several months. Its thoroughness was impressive: it consistently mapped details to broader parent categories. Yet gaps in essential detail remained.

When I explained what constituted a high-quality result and outlined an investigative method for exploring deeper, the AI began uncovering much of what it had previously overlooked. It reminded me of a bright but inexperienced student – quick to find a general answer, but capable of remarkable insight once directed toward primary sources.

Lesson: An LLM functions as an intelligent research assistant, but one should always assume it has not yet gone deep enough. The user must drive the analysis, guiding it toward specificity and context.

Today: Depending on phrasing, current models may decline to answer for safety or policy reasons. Yet when they do respond, their analyses are broader, more detailed, and often include links to related or supplementary material.


Step 2 – Function and Consequence

With a foundation in place, I next tested whether AI could understand how a given infrastructure functioned and estimate the consequences of losing a particular node.

Initially, the AI concentrated on the science and technology underpinning each system, describing in detail how individual components operated. In that domain it performed exceptionally well. What it lacked was situational awareness – the ability to discern what actually occurred at specific sites.

To address this, I prompted it to search for contextual indicators: evidence of nearby activities that might reveal each site’s operational role. That adjustment produced far more accurate results and a clearer picture of which nodes were truly critical.

Lesson: At the time, AI did not intuitively link technical knowledge with contextual evidence. Once guided to do so, however, it made those connections rapidly and accurately.

Today: As with the previous step, depth of insight still depends on the framing of the question. Modern models, though, demonstrate greater capacity to integrate social, economic, and technical data in support of more specialized reasoning.


Summary of Phase 1

This initial phase – focused on selected aspects of national critical infrastructure – lasted roughly three months of part-time experimentation. It was both intellectually demanding and energizing.

What struck me most was the naturalness of the interaction. The LLM retained contextual memory throughout the dialogue, so as new information was gathered and assessed, it remained situated within the ongoing analytical frame. I began new sessions only when shifting to an entirely different infrastructure category – the value being the ability to observe a consistent learning approach within each one. In hindsight, I wish I had given the model the chance to apply the analytical approaches developed in one category to a different technology.

That ability to maintain context is immensely valuable – but also potentially risky, since continuity can introduce bias. Yet it was precisely that continuity that made deeper analysis possible, in contrast to traditional search engines that rely on disconnected hyperlinks.

There is genuine risk in allowing someone without broad subject-matter knowledge to rely uncritically on AI-generated findings. Paradoxically, however, AI also provides tools to mitigate that very risk – creating a tension between trust and oversight that defines the new analytical frontier.

Structurally, the AI approached every infrastructure problem in a similar manner – likely reflecting the way I framed my questions. Its grasp of underlying science and engineering was impressive. But in assessing criticality, it required sustained guidance to reach comprehensive and reliable conclusions.

(To Be Continued)