I found myself in a less-than-ideal circumstance: I was a battle staff director in an exercise facing an immediate problem with significant ramifications. The issue at hand was one in which I had only marginal expertise, and to make matters worse, the words and terminology being used by those involved were not quite right. I knew just enough to understand what was wrong, but not enough to decisively solve it.
I could not pass the buck—this was my team’s problem to solve.
There was no time to make myself an expert on the issue, and while the experts in the room had more knowledge than I did, they too lacked the specific expertise for this particular challenge. To compound matters further, I was juggling a handful of other critical issues that would stall if this new problem consumed all of my team’s focus.
So, we turned to a large language model for support. We fed it the problem, and in return, we received excerpts from relevant publications, recommended courses of action, historical examples, and enough data to spur my experts’ own thought processes.
In just fifteen minutes, we went from a problem that no one in the room knew how to solve to crafting an executable plan to resolve it.
A Tool, Not a Panacea
I am not a blind disciple of artificial intelligence (AI), nor do I believe it is a panacea for all the US Air Force’s challenges. I am keenly aware of its limitations and the pitfalls of over-dependence. Errors in design, false positives, and incorrect inputs can lead to poor decisions based on limited data. However, if we accept such risks as a reason to disregard AI entirely, we deny ourselves the ability to exploit its profound advantages.
Beyond being merely additive, human-machine teaming may very well become a necessity. Although early 2026 recruiting projections are showing promise, this does not automatically translate to increased manning for a particular unit or specialty, especially as the Air Force’s authorized end strength remains relatively stagnant. Air Force staffs and Air Operations Centers (AOCs) will continue to navigate challenges from previous drawdowns, recruitment shortfalls, and retention levels for the foreseeable future. While not a cure-all, large language models can help relieve some of this strain by replicating the research, analysis, and product generation of a much larger staff, allowing human Airmen to serve in director and quality-control roles.
From Operational Assistant to Personal Advisor
AI’s utility extends beyond alleviating manpower shortages. Consider the endless fight with formatting. Provided with relevant inputs, AI can automatically present reports and information tailored to a specific general’s preferred format for consuming information, saving countless hours of purely administrative work and allowing staff to focus on the substance of the information, not the cosmetics of its presentation.
Likewise, operational planning teams can greatly benefit from AI-assisted intelligence collection, accelerated course of action (COA) development, and AI-informed wargaming. With physics-based AI models, commanders could even witness COAs play out in front of them, deepening their understanding of the associated risks. AI can also rapidly generate responses to requests for information (RFIs) that arise during battle rhythm events, returning precious time and decision-space to the commander. Through this assistance, a frustrated senior officer may never again have to utter the phrase, “don’t bring me a problem without a solution.”
On a personal level, despite alarmist headlines, some of which raise valid concerns, that AI may dilute one’s creativity and critical thinking capacity, AI can become a powerful tool for enhancing one’s skillsets. An AI can act as a personal advisor, curating a curriculum specifically designed to accelerate proficiency in one’s assigned role. By identifying and suggesting exactly what one needs to study to gain expertise, it enables an Airman to gain the equivalent of years of experience with a focused investment of time, fundamentally changing how we approach development.
The Human in the Loop: Keys to Success
Harnessing the power of AI is not a passive endeavor; it is a skill in itself. Three key skills are needed for an officer to effectively leverage AI in staff and operational C2 functions:
- Ask the right question, the right way. The quality of the AI’s output is directly tied to the quality of the prompt. A well-crafted question that provides proper context and clear intent is the first and most critical step.
- Recognize when a response is wrong or “off.” The AI is a tool, not an infallible oracle. An effective user must have enough foundational knowledge to spot inaccuracies, illogical conclusions, or subtle biases in the AI’s response and must have a general understanding of the source material used to inform these responses. This is the human quality control that prevents bad data from becoming a bad decision.
- Possess the discipline and self-awareness to know when one cannot do either of the above effectively. Perhaps the most important skill is recognizing the limits of one’s own knowledge. Having the humility to admit when one lacks the background to properly frame a question or validate a response prevents overconfidence in the technology and ensures critical thinking remains at the heart of the process.
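To make the first skill concrete, here is a minimal sketch of what “proper context and clear intent” can look like when structured as a reusable prompt template. All names and field choices here are the author’s illustrative assumptions, not a prescribed Air Force format; the point is simply that a good prompt separates role, context, task, and desired output rather than asking a bare question.

```python
# Illustrative sketch only: a structured prompt template embodying the
# "right question, the right way" principle. Field names are hypothetical.
def build_staff_prompt(role, context, task, output_format):
    """Compose a prompt that supplies explicit context and clear intent,
    and instructs the model to flag insufficient information rather than
    guess -- supporting the human quality-control step."""
    return "\n".join([
        f"Role: You are assisting {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
        "If the provided context is insufficient, state what is missing "
        "instead of guessing.",
    ])

# Example use with placeholder exercise details.
prompt = build_staff_prompt(
    role="a battle staff during a command and control exercise",
    context="Planning team lacks specialty expertise on the current problem.",
    task="Summarize relevant doctrine and propose two courses of action "
         "with associated risks.",
    output_format="Bulleted list, citing any source publications referenced.",
)
print(prompt)
```

Note the final instruction in the template: explicitly telling the model to admit gaps rather than fabricate an answer makes the second skill, spotting a response that is wrong or “off,” considerably easier.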
Conclusion
The integration of artificial intelligence into daily staff and operational C2 functions is no longer a distant concept; it is a present-day reality with tangible benefits. From rapidly solving complex operational problems to personalizing training and development to streamlining administrative tasks, AI stands ready to augment Airmen in unprecedented ways. It can function as a tireless researcher, a personalized mentor, and an administrative assistant, freeing up human warfighters to focus on the critical thinking, leadership, and decision-making that only they can provide.
However, the effectiveness of this powerful tool hinges entirely on the user. The ability to craft precise questions, the wisdom to critically evaluate the answers, and the self-awareness to recognize one’s own limitations are the quintessential skills of the modern officer. Without them, AI is merely a sophisticated toy; with them, it is a revolutionary force multiplier.
Finally, in a fitting demonstration of the very human-machine teaming this article espouses, it is worth noting this conclusion itself was drafted by an AI while Tater was off trying to solve other problems. It serves as a final, practical example of how you can leverage these tools not to replace your own thoughts, but to refine them, articulate them, and bring them to a sharper, more effective close.
Lt Col Joshua “Tater” Williams, USAF, is a Senior Air Battle Manager and has served in a variety of Air Force operational, instructional, and staff roles. He is a graduate of the United States Air Force Weapons School, the United States Marine Corps School of Advanced Warfighting, and the United States Marine Corps Command and Staff College.
The views expressed are those of the author and do not reflect the official policy or position of the U.S. Air Force, Department of Defense, or U.S. government.