
March 22, 2026 · 9 min read

AI in Aerospace & Defense: Why Contractors Are Spending Decades on Systems AI Could Deliver in Months

Aerospace and defense companies generate more engineering data per program than any other industry—and still manage requirements, testing, and sustainment with processes designed in the Cold War. AI-native delivery can compress program timelines from decades to years while improving mission readiness.

A $900 billion industry still running on waterfall processes and paper-based approvals

Global aerospace and defense spending exceeded $900 billion in 2025. The industry designs and builds the most complex systems humanity has ever created—fifth-generation fighters, nuclear submarines, satellite constellations, hypersonic weapons, and autonomous platforms that operate in contested environments where failure means loss of life and national security consequences. The engineering rigor required is extraordinary, and nobody disputes that.

What is disputable is whether that rigor requires the timelines the industry has normalized. The average major defense acquisition program takes 8-12 years from Milestone B to initial operational capability. The F-35 program has been in development for over two decades. The James Webb Space Telescope took 25 years from conception to deployment. Cost overruns on major defense programs averaged 28% in FY2025, according to GAO's annual weapon systems assessment. These are not signs of engineering discipline. They are symptoms of delivery model failure.

Traditional consulting firms and defense-focused integrators approach aerospace AI with the same delivery model they have used for decades: multi-year digital engineering transformation roadmaps, 50-person teams mapping model-based systems engineering architectures, and compliance frameworks that treat every AI application as if it were controlling a weapons system in flight. In an industry where adversary capabilities advance annually and program delays measured in years translate directly to capability gaps, an 18-month consulting engagement to deploy a single AI tool for requirements analysis is not systems engineering prudence. It is institutional inertia funded by cost-plus contracts.

Three use cases where aerospace and defense firms are burning billions on manual processes

Requirements management and traceability is the highest-ROI starting point for most defense programs. A major weapons system program manages 50,000-200,000 individual requirements across system, subsystem, and component levels. Each requirement must be traced to a parent specification, validated against test procedures, and verified through formal review. Current requirements management is overwhelmingly manual—systems engineers spend 40-60% of their time maintaining traceability matrices in DOORS, Jama, or spreadsheets rather than doing actual engineering analysis. AI-powered requirements analysis can automatically classify requirements by type and quality, detect conflicts and ambiguities across specification documents, maintain traceability links as designs evolve, and flag untestable requirements before they propagate downstream. Programs deploying AI requirements tools report 50-70% reduction in requirements review cycle time and 30-40% improvement in defect detection during specification reviews. For a program spending $200 million annually on systems engineering, freeing 40% of that capacity for actual analysis rather than bookkeeping is transformative.
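To make the requirements use case concrete, here is a minimal sketch of lexical requirement-quality screening, assuming requirements have already been exported from DOORS or Jama into plain id-and-text records. The sample requirements, ambiguity terms, and unit patterns are illustrative rather than any vendor's actual ruleset; production tools layer trained language models on top of heuristics like these.

```python
import re

# Illustrative heuristics only: simple lexical screening already catches
# many weak requirements before they reach formal review.
AMBIGUOUS_TERMS = [
    "as appropriate", "if possible", "adequate", "user-friendly",
    "minimize", "rapidly", "etc.", "and/or",
]
MEASURABLE = re.compile(r"\d+(\.\d+)?\s*(ms|kg|km|m|%|dB|Hz|N)", re.IGNORECASE)

def screen(text: str) -> list[str]:
    """Return quality flags for a single requirement statement."""
    flags = []
    lowered = text.lower()
    for term in AMBIGUOUS_TERMS:
        if term in lowered:
            flags.append(f"ambiguous term: '{term}'")
    if "shall" not in lowered:
        flags.append("no 'shall' statement (may not be a binding requirement)")
    if not MEASURABLE.search(text):
        flags.append("no measurable quantity (verification method unclear)")
    if lowered.count(" and ") + lowered.count(" or ") >= 2:
        flags.append("compound statement (consider splitting)")
    return flags

# Hypothetical requirements standing in for a DOORS export.
requirements = {
    "SYS-001": "The system shall detect targets at a range of 40 km.",
    "SYS-002": "The system should respond rapidly and be user-friendly.",
    "SYS-003": "The radar shall minimize power consumption as appropriate.",
}

for req_id, text in requirements.items():
    for flag in screen(text):
        print(f"{req_id}: {flag}")
```

Even screening at this level surfaces the unverifiable 'should' statements and unmeasurable clauses that otherwise slip through to formal review.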

Test planning and defect prediction is the second critical use case. Aerospace test programs are staggeringly expensive—flight test for a major aircraft program costs $50-100 million per year, with each test sortie costing $100,000-$500,000. Test matrices are built conservatively because missed defects in aerospace are catastrophic. AI-powered test optimization can analyze historical test data across programs, identify test cases with the highest probability of revealing defects, predict which subsystem interfaces are most likely to fail during integration, and optimize test sequencing to find critical issues earlier when fixes are cheaper. Defense programs using AI test optimization report 20-35% reduction in test events required to achieve equivalent coverage—savings measured in hundreds of millions on major programs. The models are proven in commercial aerospace. The barrier in defense is deploying them through the accreditation processes that defense integrators turn into multi-year programs.
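One way to picture the test-optimization math is a greedy ordering of candidate test events by expected defects revealed per dollar. The test names, costs, and probabilities below are invented for illustration; in practice the probabilities would come from models trained on historical test and defect data across programs, not hand-entered estimates.

```python
from dataclasses import dataclass

@dataclass
class TestEvent:
    name: str
    cost_usd: float     # estimated cost of executing the test event
    p_defect: float     # modeled probability the event reveals a defect

# Hypothetical candidate events for an integration test phase.
events = [
    TestEvent("flutter_envelope_expansion", 450_000, 0.08),
    TestEvent("fuel_system_integration",    120_000, 0.22),
    TestEvent("avionics_bus_loading",        60_000, 0.15),
    TestEvent("landing_gear_cycling",        90_000, 0.05),
]

# Greedy sequencing: run the events with the best expected
# defects-found-per-dollar first, so critical issues surface while
# redesign is still cheap.
ranked = sorted(events, key=lambda e: e.p_defect / e.cost_usd, reverse=True)

for i, e in enumerate(ranked, 1):
    per_million = e.p_defect / e.cost_usd * 1_000_000
    print(f"{i}. {e.name}: {per_million:.1f} expected defects per $1M")
```

The same ranking logic, fed by better failure-probability models, is what lets programs drop low-value test events without giving up coverage.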

Predictive sustainment and readiness optimization is the third use case with immediate fiscal and operational impact. The Department of Defense spends over $100 billion annually on operations and sustainment—maintaining, repairing, and supplying fielded weapon systems. Aircraft mission-capable rates have been declining for a decade, with some fleets below 60% availability. The primary drivers are reactive maintenance and inefficient supply chain management. AI-powered predictive sustainment that analyzes maintenance records, flight data, environmental exposure, and supply chain signals can predict component failures 30-90 days before they ground aircraft, optimize spare parts positioning across global supply chains, and schedule maintenance during planned downtime windows. The Air Force's Condition-Based Maintenance Plus initiative demonstrated 25-35% improvement in mission-capable rates on pilot platforms. Yet scaling these proven capabilities across the fleet takes years under traditional program management approaches when it should take months.
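A minimal sketch of the predictive-sustainment pattern follows, using synthetic data in place of real maintenance and flight records. The features, failure model, and 60-day horizon are assumptions for illustration; a fielded system would be trained on the program's actual maintenance history and validated against held-out fleet data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for fleet maintenance history. Real features would
# come from maintenance records, flight data recorders, and supply systems.
n = 5_000
hours_since_overhaul = rng.uniform(0, 2_000, n)
vibration_trend = rng.normal(0, 1, n) + hours_since_overhaul / 1_000
harsh_basing = rng.integers(0, 2, n)   # 1 = high-salt or high-dust environment

# Simulated outcome: failure within the next 60 days, loosely driven by
# wear and operating environment.
risk = (0.02 + 0.0003 * hours_since_overhaul
        + 0.05 * vibration_trend.clip(0) + 0.1 * harsh_basing)
failed_60d = rng.uniform(0, 1, n) < risk.clip(0, 0.9)

X = np.column_stack([hours_since_overhaul, vibration_trend, harsh_basing])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, failed_60d)

# Score an in-service component: 1,650 hours since overhaul, rising
# vibration, operating from a harsh environment.
candidate = np.array([[1_650.0, 1.8, 1]])
p_fail = model.predict_proba(candidate)[0, 1]
action = "schedule removal at next planned downtime" if p_fail > 0.5 else "continue monitoring"
print(f"P(failure within 60 days) = {p_fail:.2f} -> {action}")
```

The operational value comes from the last two lines: converting a failure probability into a maintenance action scheduled inside a planned downtime window instead of an unscheduled grounding.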

Why CMMC, ITAR, and classified requirements are design constraints, not decade-long blockers

Every defense program manager offers the same explanation for slow AI adoption: security and compliance. CMMC Level 2 and 3 requirements for controlled unclassified information, ITAR restrictions on defense articles and technical data, classified system accreditation under ICD 503 and JSIG, and NIST 800-171 controls create genuine security obligations. These requirements are real, consequential, and exist to protect national security. Nobody disputes their importance.

What does not hold up is the conclusion that these requirements necessitate multi-year deployment timelines for every AI application. The vast majority of AI use cases in aerospace—requirements analysis, test optimization, predictive maintenance, supply chain analytics—operate on engineering data that is CUI or unclassified. They do not touch weapons system software, classified intelligence, or control systems that affect platform safety. A requirements analysis AI that reads DOORS exports and identifies specification conflicts operates in the same data environment as the engineer doing the work manually. The security boundary does not change because AI is performing the analysis instead of a human.

An AI-native approach builds security compliance into the architecture from day one. CMMC controls are structural elements of the deployment—encryption at rest and in transit, multi-factor authentication, access logging, and data handling procedures that satisfy DFARS 252.204-7012 requirements. ITAR compliance is a data classification constraint: the system processes only data it is authorized to handle, with access restricted to U.S. persons as required. These are engineering problems with well-defined solutions, not governance programs requiring years of framework development. Defense integrators that spend 12 months building a security compliance architecture for a DOORS data analytics tool are not protecting national security. They are padding a cost-plus contract.
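As an illustration of compliance as a structural property of the code rather than a separate governance program, here is a sketch of an access path that enforces an ITAR U.S.-person restriction and writes an audit record on every access attempt. The user and document attributes are hypothetical; a real deployment would source them from the program's identity provider and data-classification system, with encryption and multi-factor authentication handled at the platform layer.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("cui_access")

# Hypothetical user and document records standing in for identity-provider
# and data-classification attributes.
@dataclass
class User:
    username: str
    us_person: bool
    cui_authorized: bool

@dataclass
class Document:
    doc_id: str
    itar_controlled: bool
    text: str

def read_document(user: User, doc: Document) -> str:
    """Enforce access rules and write an audit record for every access attempt."""
    if doc.itar_controlled and not user.us_person:
        audit.info("DENY user=%s doc=%s reason=ITAR_non_US_person", user.username, doc.doc_id)
        raise PermissionError("ITAR-controlled technical data: U.S. persons only")
    if not user.cui_authorized:
        audit.info("DENY user=%s doc=%s reason=no_CUI_authorization", user.username, doc.doc_id)
        raise PermissionError("User is not authorized for CUI")
    audit.info("ALLOW user=%s doc=%s", user.username, doc.doc_id)
    return doc.text

engineer = User("j.smith", us_person=True, cui_authorized=True)
spec = Document("SPEC-4411", itar_controlled=True, text="Interface control document ...")
print(read_document(engineer, spec)[:40])
```

The point of the sketch is that the restriction and the audit trail live in the same code path as the data access itself, which is what makes them verifiable during accreditation.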

The adversary timeline makes slow delivery a national security risk

China's defense AI investment has grown at 20-25% annually since 2020. The PLA has deployed AI-powered autonomous systems, predictive maintenance platforms, and intelligence analysis tools at a pace that U.S. programs cannot match under current acquisition timelines. The 2025 National Defense Strategy explicitly identified the speed of technology adoption as a critical competitive factor, noting that 'the Department cannot afford acquisition timelines measured in decades when adversary capabilities advance in years.'

Every month a predictive sustainment AI sits in a pilot program instead of fleet-wide production is a month where aircraft mission-capable rates remain at 55-65% instead of the 80%+ rates the models demonstrate. Every quarter a test optimization AI is stuck in accreditation review is a quarter where test programs burn through $50 million in test events that could have been eliminated. Every year a requirements analysis AI takes to reach production is a year where systems engineers spend half their time on traceability bookkeeping instead of identifying integration risks that cause $500 million redesign cycles downstream.

The national security argument for rapid AI deployment in defense is not speculative. It is a direct response to a documented and accelerating capability gap. The question is not whether defense AI should be deployed carefully—of course it should. The question is whether 'carefully' requires the timelines the current delivery model produces, or whether those timelines are artifacts of an acquisition culture designed for Cold War programs and a consulting industry incentivized to maximize engagement duration. The answer, for non-safety-critical engineering tools, is unambiguously the latter.

What AI-native delivery looks like for a defense program

Week one: identify the highest-impact engineering bottleneck—usually requirements management, test planning, or technical data analysis. Audit available data in the program's engineering tools (DOORS, Jama, Windchill, Teamcenter) and establish the CUI/ITAR data handling architecture. Build a working prototype using actual program data in an authorized environment. By end of week one, systems engineers are seeing AI-generated requirements analysis, conflict detection, or traceability gap identification on their real specification documents—not a demo with synthetic data, but actual program artifacts producing actionable engineering insights.

Week two: integrate AI outputs into the existing engineering workflow so systems engineers see recommendations in their normal tools rather than a separate dashboard. For requirements analysis, surface ambiguity flags and traceability gaps directly in the requirements management tool. For test optimization, present recommended test prioritization alongside the existing test planning infrastructure. Iterate based on engineer feedback—experienced systems engineers know which requirements are intentionally vague for flexibility, which interfaces have historically been problematic, and which test conditions the data cannot fully capture. Their domain expertise calibrates the model in ways that historical data alone cannot.
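Calibration from that feedback can be as simple as tracking how often each flag category is accepted or dismissed and demoting the categories the program's engineers consistently reject. The feedback records and the 50% threshold below are illustrative placeholders.

```python
from collections import Counter, defaultdict

# Hypothetical feedback log: each time an engineer accepts or dismisses a
# flag raised by the requirements-analysis tool, the disposition is recorded
# with the flag category.
feedback = [
    ("ambiguous_term", "accepted"),
    ("ambiguous_term", "accepted"),
    ("no_measurable_quantity", "dismissed"),   # intentionally qualitative requirements
    ("no_measurable_quantity", "dismissed"),
    ("no_measurable_quantity", "accepted"),
    ("compound_statement", "accepted"),
]

# Acceptance rate per flag category; categories engineers consistently
# dismiss get demoted so the tool converges on the program's actual
# conventions instead of generic heuristics.
totals = defaultdict(Counter)
for category, disposition in feedback:
    totals[category][disposition] += 1

for category, counts in totals.items():
    rate = counts["accepted"] / (counts["accepted"] + counts["dismissed"])
    action = "keep surfacing" if rate >= 0.5 else "demote to low-priority"
    print(f"{category}: {rate:.0%} accepted -> {action}")
```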

Weeks three through six: expand to additional subsystems, integrated product teams, or program phases. Establish monitoring for analysis accuracy, engineer adoption, and engineering cycle time impact. Produce accreditation documentation against the already-built and tested system—the system security plan describes what exists, not what might be built. Train engineering teams on the new workflow. By week six, the program has a production AI tool reducing engineering cycle time with measurable impact on schedule and cost performance.
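Monitoring in this phase can start as a handful of plainly computed metrics rather than a dashboard project. The figures below are invented placeholders; the point is that accuracy, adoption, and cycle-time impact are each a one-line computation over data the program already records.

```python
# Hypothetical six-week monitoring snapshot; real values would come from
# review logs, tool telemetry, and the program's schedule data.
flags_raised = 420
flags_confirmed_by_engineers = 356     # accuracy proxy: precision of the tool's findings
engineers_with_access = 48
engineers_using_weekly = 37
review_cycle_days_baseline = 21.0
review_cycle_days_current = 9.5

precision = flags_confirmed_by_engineers / flags_raised
adoption = engineers_using_weekly / engineers_with_access
cycle_time_reduction = 1 - review_cycle_days_current / review_cycle_days_baseline

print(f"Finding precision:      {precision:.0%}")
print(f"Weekly adoption:        {adoption:.0%}")
print(f"Review cycle reduction: {cycle_time_reduction:.0%}")
```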

The critical difference from traditional defense consulting: systems engineers interact with a working tool in week two, not after an 18-month digital engineering transformation. Defense engineers are among the most demanding users of any analytical tool because the consequences of errors are severe. Their trust is earned through validated accuracy on real program data, one correct analysis at a time. An engineer who sees the AI correctly identify a requirements conflict they missed gains confidence immediately. Trust in defense AI is built in the engineering workspace, not in vendor briefings.

Model-Based Systems Engineering does not require AI to wait

The defense industry has spent a decade pursuing Model-Based Systems Engineering as the future of program execution. MBSE promises to replace document-centric engineering with integrated digital models that maintain consistency across requirements, architecture, design, and verification. The vision is sound. The execution has been glacial—fewer than 20% of major defense programs have achieved meaningful MBSE adoption, according to a 2025 NDIA survey.

Traditional consulting firms have exploited this gap by positioning AI deployment as dependent on MBSE maturity. 'You need to complete your digital engineering transformation before AI can add value' is a statement that justifies multi-year MBSE consulting engagements before AI even enters the conversation. It is also false. AI does not require a complete digital thread to deliver value. An AI requirements analysis tool that reads DOORS exports and PDF specifications—the documents that actually exist on programs today—delivers immediate value without waiting for MBSE infrastructure that may take years to mature.

The pragmatic approach is parallel deployment: use AI on the data that exists now while building toward MBSE maturity over time. As programs digitize their engineering artifacts, the AI tools become more powerful because they operate on richer data. But the value starts immediately, on today's documents, in today's workflows. Waiting for MBSE perfection before deploying AI is like waiting for the highway to be built before using the car. The car works on the existing road. It works better on the highway. But refusing to drive until the highway is complete is not a transportation strategy.

The programs that deploy AI in 2026 will define defense capability in 2035

Aerospace and defense programs operate on generational timescales. Systems fielded this decade will remain in service for 30-40 years. The engineering decisions made today—how requirements are analyzed, how tests are planned, how sustainment is managed—determine capability and cost for decades. Programs that deploy AI-powered engineering tools in 2026 will accumulate years of calibrated models, validated workflows, and institutional expertise that late adopters cannot replicate by purchasing the same technology.

The sustainment advantage is especially powerful. A predictive maintenance model that has been learning from three years of fleet operational data across thousands of aircraft detects failure patterns with a precision that a new deployment cannot match. An AI-powered supply chain optimization system that has observed three years of demand patterns, lead time variability, and depot throughput cycles positions spare parts with accuracy that saves billions in inventory carrying costs while improving readiness. These advantages compound—and they compound on timescales measured in decades because defense platforms remain in service for decades.

The question for every defense program executive and acquisition professional is direct: can your delivery partner get a production AI tool into the hands of your systems engineers, test planners, or sustainment teams in six weeks? If the answer involves 18-month digital engineering transformation roadmaps, 30-person consulting teams, and MBSE maturity prerequisites, you are paying for a delivery model optimized for the consulting partner's revenue model, not your program's schedule performance. The engineering data exists. The models are proven. The adversary is not waiting. The only variable is how fast you ship.