Matthew Kling is VP and General Manager of AI Systems at MatrixSpace, leading development of the AiCloud and AiEdge platforms. He previously worked on advanced defense systems at RTX and led drone detection research at Northeastern University.

1. The assumption behind most autonomous systems is that connectivity is available — to a cloud, to a command node, to other vehicles in a network. When that assumption breaks down in a contested RF environment, what actually changes about how your edge AI has to behave?
Answer: When a system loses connectivity in a contested RF environment, the biggest change is that the edge can no longer depend on the cloud, a command node, or another platform to do the important work. That means the edge AI has to keep operating on its own, in real time, with no drop in awareness.
At MatrixSpace, we built our hardware and software architecture around that reality from the start. In the past, a lot of edge AI efforts tried to solve the problem by dropping a large server into the system, without really addressing scalability, cost, power, portability, or deployment flexibility. We took a different approach. We started with the belief that real capability has to live at the edge, while still allowing centralized, remote, and cloud software to add value when connectivity is available.
For us, that means everything has to run on small-form-factor, low-power, and cost-effective computing devices. We have invested heavily in optimizing our software around that model. The result is real-time radar signal and data processing that runs on a very small CPU while handling gigabits of data per second, along with edge-based AI through MatrixSpace AiEdge for real-time target classification, track fusion, and multi-sensor fusion running on an ultra-small, low-power edge GPU.
That level of performance does not happen by accident. It requires deep development of our own AI models, along with datasets, training, and validation built from the ground up.
We then use MatrixSpace AiCloud to extend those edge capabilities when connectivity is available. AiCloud adds value through multi-site deployments, cross-site fusion processing, threat models, deep historical data storage and analytics, and future use of technologies such as agentic AI and generative AI.
The key point is that our architecture is designed so that when connectivity is lost, awareness is not. A local operator can still detect, track, classify, and fuse targets in real time and maintain awareness of the surrounding airspace, even without a connection to centralized command and control. The cloud is there to extend and scale those capabilities, not to make the edge dependent on it.
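To make that design principle concrete, here is a minimal sketch of a degraded-mode edge loop. Every name in it (EdgeAwarenessLoop, CloudUplink) is invented for illustration; it shows the pattern Kling describes, not MatrixSpace's actual software. The time-critical loop never blocks on the uplink, and cloud sync is a best-effort drain of a bounded buffer:

```python
import queue

class CloudUplink:
    """Hypothetical stand-in for a cloud/C2 link that comes and goes."""
    def __init__(self):
        self.connected = False
    def is_connected(self):
        return self.connected
    def send(self, msg):
        print("uplinked:", msg)

class EdgeAwarenessLoop:
    """Sketch of the edge-first principle: detect/track/classify runs
    locally every cycle; the cloud only extends the picture when a
    link happens to be up. Names and structure are illustrative."""
    def __init__(self, uplink, backlog_size=10_000):
        self.uplink = uplink
        self.backlog = queue.Queue(maxsize=backlog_size)  # bounded buffer

    def detect_and_track(self, raw_frame):
        # Placeholder for local signal processing + AI classification.
        return {"t": raw_frame["t"], "tracks": raw_frame["returns"]}

    def step(self, raw_frame):
        # 1. Awareness is produced locally, link or no link.
        picture = self.detect_and_track(raw_frame)

        # 2. Cloud sync is best-effort: buffer, drop oldest under pressure.
        if self.backlog.full():
            self.backlog.get_nowait()
        self.backlog.put_nowait(picture)
        while self.uplink.is_connected() and not self.backlog.empty():
            self.uplink.send(self.backlog.get_nowait())
        return picture

loop = EdgeAwarenessLoop(CloudUplink())
print(loop.step({"t": 0.0, "returns": ["contact-1"]}))  # works offline
```

The detail that matters is the bounded backlog: during a long outage the edge drops the oldest buffered data rather than stalling the sensing loop, so a degraded link degrades history, not awareness.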
2. MatrixSpace has moved from detection to classification to tracking — increasingly pushing that intelligence to the sensor itself. Where is the hard ceiling right now on what edge compute can actually carry, and how do you design around it?
Answer: There is definitely a ceiling when it comes to processing and AI at the edge, and a big part of the challenge is knowing what truly has to live there versus what can be handled elsewhere. Our view is that the edge should carry the time-critical workloads that directly affect awareness and response, while the cloud or centralized systems should extend the mission, not enable the basics.
For us, that means the sensor has to do much more than just collect raw data. It has to process that data locally, classify what it is seeing, maintain tracks, and contribute to fusion in real time. That approach reduces dependence on high-bandwidth backhaul and avoids shipping large amounts of raw data across constrained links just to get useful outcomes.
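As a rough illustration of why local processing eases the backhaul problem, consider shipping compact track reports instead of raw samples. The TrackReport fields and the 2 Gbit/s raw-stream figure below are assumptions made for the sake of the arithmetic, not MatrixSpace numbers:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical compact track report: the kind of output that leaves the
# sensor once detection, tracking, and classification happen locally.
@dataclass
class TrackReport:
    track_id: int
    t: float           # timestamp, seconds
    lat: float
    lon: float
    alt_m: float
    vel_mps: tuple     # (north, east, down) velocity
    cls: str           # classifier label, e.g. "uas"
    conf: float        # classification confidence, 0..1

report = TrackReport(42, 1718000000.0, 42.36, -71.06, 120.0,
                     (3.1, -1.2, 0.0), "uas", 0.93)
msg = json.dumps(asdict(report)).encode()

# Back-of-envelope: a 2 Gbit/s raw stream vs. 10 reports per second.
raw_bps = 2e9
report_bps = 10 * len(msg) * 8
print(f"report bytes: {len(msg)}, backhaul ratio: {raw_bps / report_bps:.0f}x")
```

Even with generous report rates, the reduction is several orders of magnitude, which is what makes constrained links workable.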
The hard limit is usually not one single thing. It is a combination of compute, power, thermal constraints, size, cost, and the realities of the communications environment. Designing around that means being very deliberate about where each function lives, how efficiently software is written, and how much value you can extract from the data before anything ever leaves the sensor.
It also means accepting that edge architecture cannot be an afterthought. You have to build the system from the beginning around the idea that real-time decisions will often need to happen locally, under constrained conditions, on very small compute platforms. That is where the engineering discipline comes in, and that is where we have spent a lot of our effort.
3. Your USAF Phase II work involves frequency-agile sensing across radar, passive RF, and communications on small autonomous vehicles. Without getting into program specifics, what does that kind of multi-modal sensing architecture demand from the software stack that single-mode systems don’t?
Answer: AI sensing is really about fusing multiple sensing modalities at the edge in real time. Instead of relying on a single input, the system combines radar, passive RF, optical, and other sensors to build a more complete and reliable understanding of what is actually happening in the environment.
What this demands from the software stack is fundamentally different from a single-mode system. You are no longer just processing one stream of data; instead, you have to manage multiple data types, align them in time, fuse them into a coherent picture, and make decisions quickly enough to be useful at the edge. That requires tight integration between signal processing, AI models, and real-time system control.
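A toy example helps show what "align them in time" means in practice. The sketch below buffers timestamped detections from each modality and associates them within a common fusion window; the 100 ms window and the field names are assumptions, not a description of the MatrixSpace fusion stack:

```python
from collections import deque

class TimeAlignedFuser:
    """Toy multi-modal alignment: each sensor reports at its own rate,
    so detections are buffered per modality and grouped into a common
    fusion epoch before any association or fusion logic runs."""
    def __init__(self, window_s=0.1):
        self.window_s = window_s
        self.buffers = {"radar": deque(), "rf": deque(), "optical": deque()}

    def ingest(self, modality, t, detection):
        # Assumes in-order arrival per modality; real stacks must not.
        self.buffers[modality].append((t, detection))

    def fuse_at(self, t_fuse):
        epoch = {}
        for modality, buf in self.buffers.items():
            # Drop anything too old to matter for this fusion epoch.
            while buf and buf[0][0] < t_fuse - self.window_s:
                buf.popleft()
            # Keep detections inside the window around t_fuse.
            epoch[modality] = [d for (t, d) in buf
                               if abs(t - t_fuse) <= self.window_s]
        return epoch

f = TimeAlignedFuser()
f.ingest("radar", 10.00, {"az_deg": 41.0, "rng_m": 900})
f.ingest("rf",    10.03, {"freq_mhz": 2437, "bearing_deg": 40.5})
print(f.fuse_at(10.05))  # both land in the same 100 ms fusion epoch
```

Real systems also have to handle per-sensor latency, clock drift, and out-of-order arrival, which is part of why multi-modal fusion demands much tighter integration than a single-mode pipeline.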
It also requires a high degree of flexibility. In our case, the system is not fixed to one sensing mode: it can actively sense using radar, passively sense RF signals, and even shift into communications modes. That agility is enabled through software-defined radios, which allow a single platform to take on multiple roles without adding more hardware.
The benefit is significant: reducing size, weight, and power is critical for small autonomous systems and for reducing operator burden. At the same time, you gain the ability to operate across sensing and communications in a more adaptive way, including supporting high-bandwidth, low-probability-of-detection networks.
The key difference is that the software stack has to orchestrate multiple sensing and communication functions seamlessly, in real time, on constrained hardware. That is a much more complex problem than a single-mode system, but it is what enables a truly flexible and resilient capability.
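For a concrete picture of that mode agility, the following sketch time-shares a single software-defined front end across the three roles described above. The scheduling policy and thresholds are entirely hypothetical, a sketch of the idea rather than any fielded logic:

```python
from enum import Enum, auto

class Mode(Enum):
    ACTIVE_RADAR = auto()   # transmit + receive: detect and track
    PASSIVE_RF = auto()     # receive-only: no emissions
    COMMS = auto()          # data link: burst queued traffic

def next_mode(emcon_strict: bool, link_backlog: int) -> Mode:
    """Illustrative scheduler: one SDR front end takes on multiple
    roles instead of the platform carrying three separate radios."""
    if emcon_strict:
        return Mode.PASSIVE_RF   # emission control: listen, don't radiate
    if link_backlog > 100:
        return Mode.COMMS        # drain the backlog, then resume sensing
    return Mode.ACTIVE_RADAR

for emcon, backlog in [(False, 0), (False, 250), (True, 250)]:
    print(next_mode(emcon, backlog))
```

The design point is that role selection becomes a software decision made in real time, which is exactly the orchestration burden that single-mode stacks never have to carry.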
4. The drone threat environment has changed materially in the last 18 months — faster platforms, electronic countermeasures, coordinated swarms. How has that changed what you’re asked to detect, and how has it changed your development priorities?
Answer: The short answer is that it has not changed our core approach, but it has reinforced why we built it the way we did.
From the start, we set out to design a new type of radar and sensing system specifically for the low-altitude airspace, from ground level up to several thousand feet, and to operate in complex, high-clutter environments. That meant being able to detect, track, and identify everything from very slow, low-signature targets to fast-moving objects traveling hundreds of miles per hour.
What has changed over the last 18 months is adversary behavior. We are seeing faster platforms, more coordinated operations, and deliberate attempts to reduce RF visibility, including fiber-optic control links that remove traditional RF signatures completely. In those scenarios, you cannot rely on RF detection solutions alone; you need a sensing layer that the adversary cannot easily hide from, and that is where radar becomes essential.
In many ways, those trends have made our technology even more relevant than when we began. When adversaries add payloads or modify their drones to evade one type of sensor, that often makes them more visible to another, particularly radar.
Where this has influenced us is in how we prioritize integration and system-level capability. We started as a radar-first company, but we have expanded into a full multi-sensor architecture that brings together radar, RF, optical, and AI-driven fusion.
The focus now is on delivering a complete sensing and response stack that can operate across a wider range of threat types and operational conditions.
While the threat has evolved, our core foundation has held steady and been reinforced. The shift has been less about changing direction and more about accelerating how we integrate and deploy a broader, radar-first, multi-sensor solution for low-altitude security.
5. L3Harris came in as a strategic investor at your Series B. For a company building at the sensor and edge layer, what does a defense-industry partnership like that actually unlock that capital alone doesn’t?
Answer: L3Harris brings a lot more than capital. As a major defense prime, they have access to contract vehicles, programs of record, and customer relationships that are very difficult for a smaller company to reach on its own. That access alone can significantly accelerate how and where our technology gets deployed.
They also operate at a scale of system integration that we cannot replicate independently. A good example is their Diamondback UGV, shown at AUSA 2025. It is a multi-mission platform, and our radar is integrated as part of a broader sensor and autonomy stack, providing real-time airspace awareness that feeds into other onboard systems. That kind of integration into larger operational platforms is where a partnership like this really matters.
Our relationship with L3Harris works both ways, though. What we bring is speed, focus, and the ability to innovate quickly at the sensor and edge layer with a more commercially aligned mindset. Larger organizations often have the scale, but not always the agility to move at a commercial pace. That combination has been very effective, and I see it continuing into the future.
The real value is not just funding: it is access, integration at scale, and the ability to pair their reach with our pace of innovation.
Matthew Kling, VP and General Manager of AI Systems, MatrixSpace
Matthew and his team are creating intelligent sensing platforms that help people better sense and secure the outdoors. Their work combines radar, optical, and other sensing technologies through distributed AI — enabling users to see more, understand more, and act faster.
As one of the visionaries behind the company’s distributed sensing AI technology, Matthew leads the portion of the business developing MatrixSpace AiCloud and AiEdge, platforms that form the foundation of MatrixSpace AI.
A recognized expert in counter-UAS (CUAS) technology, Matthew has built his career developing advanced software, RF, sensing, and autonomy systems to detect and counter airborne threats. His prior work on complex defense systems at RTX and the research he led at Northeastern University on drone detection and mitigation helped establish the foundation for the CUAS innovations he is now pioneering at MatrixSpace.

