
ATTCS: TerraSense’s Achievement in Ground-to-Ground Automated Situational Awareness

  • Writer: Natalia Kaplan
  • 7 days ago
  • 3 min read

Field-Validated Through ATLAS Uncrewed Ground Vehicle Trials with BAE Systems Australia


The Automatic Target detection, Tracking and Classification System (ATTCS) is a significant accomplishment in ground-to-ground sensing. Its real-world relevance was recently highlighted in publicly released trials of the Autonomous Tactical Light Armour System (ATLAS) uncrewed ground vehicle conducted by BAE Systems Australia, where ATTCS was integrated into the vehicle’s VANTAGE automated turret as part of a broader autonomy and mobility evaluation. The project established a new approach to automated situational awareness by creating an AI-driven, modular architecture capable of operating at the tactical edge.


Rather than a single application, ATTCS is a framework for real-time distributed sensor processing, data fusion, and actionable output, designed to operate under the constraints found outside laboratories: limited bandwidth, non-deterministic latency, and complex ground environments.





Why Ground-to-Ground Is a Different Problem


Automated sensing matured first in air and maritime domains, where the environment is comparatively open, and targets are discrete against uniform backgrounds. Ground environments are fundamentally different. Terrain produces constant occlusion, clutter is persistent, and objects can stop, hide, or merge with their surroundings.


Sensor geometry also changes the problem. Ground systems often involve short baselines, overlapping fields-of-view, and uneven placement, not the wide separations typical of air or maritime ISR. Methods that work well for an aircraft against sky or a vessel on open water do not directly translate to a vehicle moving through treelines or a person crossing broken terrain. ATTCS was developed specifically to address these ground realities.


The Core Technical Uncertainties


The primary objective of ATTCS was to process and fuse data from a heterogeneous, multimodal, wide-field-of-regard sensor array seamlessly, providing a single, coherent output to the operator.


  1. Deterministic, High-Precision Data Fusion in a Distributed System

    • The system architecture uses multiple independent edge computers, each processing a separate sensor stream. A key hurdle was correlating and fusing data between nodes while keeping the streams synchronized at ingestion and transmission points, which are subject to non-deterministic latency (jitter).

    • ATTCS achieved accurate data fusion by establishing a common temporal reference frame across nodes. This moved the system from a theoretical prototype to a robust, real-time distributed system.


  2. Model Generalization and Performance at Extended Range (Far-Field Detection)

    • AI/ML models struggle to maintain high detection and classification accuracy (e.g., mAP, IoU) for targets that occupy only a handful of pixels (e.g., <10 pixels) at long distances. The project required the system to maintain detection and tracking capability out to objective ranges of up to 1000 m.

    • The project included focused research and refinement of deep learning models and algorithms to enhance generalization and performance in these "far-field" scenarios. This, alongside comprehensive data collection, ensures the system can reliably detect targets at long range, a critical requirement for situational awareness in real-world environments.


  3. Maintaining Seamless Tracking Across Sensor and Domain Transitions

    • A key uncertainty was how to track multiple targets seamlessly as they move through the overlapping fields-of-view (FoV) of different sensors/nodes and under different environmental conditions. 

    • Advanced fusion, sensor calibration, and tracking algorithms were developed to maintain a persistent, single-track ID for a target regardless of which sensor, or combination of sensors, was viewing it. The core logic of the system provides this continuity, overcoming the intrinsic difficulties of tracking in a multi-sensor array.
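The common temporal reference frame described in item 1 can be illustrated with a minimal sketch. Assuming each edge node emits timestamped position estimates, detections arriving with jitter-offset timestamps can be resampled onto a shared reference time before fusion. The node data, timestamps, and the simple averaging step here are all illustrative, not ATTCS internals:

```python
from bisect import bisect_left

def interpolate_position(samples, t_ref):
    """Linearly interpolate a node's (t, x, y) samples to time t_ref.

    `samples` is a list of (t, x, y) tuples sorted by t. Transmission
    jitter means different nodes report at offset timestamps; resampling
    onto a common reference time puts all detections into one temporal
    frame before they are fused.
    """
    times = [s[0] for s in samples]
    i = bisect_left(times, t_ref)
    if i == 0:
        return samples[0][1:]
    if i == len(samples):
        return samples[-1][1:]
    (t0, x0, y0), (t1, x1, y1) = samples[i - 1], samples[i]
    w = (t_ref - t0) / (t1 - t0)
    return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))

# Two nodes observing the same target, timestamps offset by jitter:
node_a = [(0.00, 10.0, 5.0), (0.10, 11.0, 5.2)]
node_b = [(0.03, 10.2, 5.1), (0.13, 11.2, 5.3)]

t_ref = 0.05  # common reference time
pa = interpolate_position(node_a, t_ref)
pb = interpolate_position(node_b, t_ref)
fused = tuple((a + b) / 2 for a, b in zip(pa, pb))
```

A production system would also need clock synchronization between nodes (e.g., a PTP-style protocol) so that the timestamps themselves share a reference; the sketch assumes that step has already happened.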
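The far-field challenge in item 2 comes down to geometry: the pixel footprint of a target shrinks with range. A short worked example, using illustrative sensor parameters (the field of view, resolution, and target size below are assumptions, not ATTCS specifications), shows why targets near 1000 m can fall under the 10-pixel mark:

```python
import math

def pixels_on_target(target_width_m, range_m, hfov_deg, image_width_px):
    """Approximate horizontal pixel footprint of a target.

    Divides the target's angular subtense by the per-pixel
    instantaneous field of view (IFOV) of the sensor.
    """
    ifov_rad = math.radians(hfov_deg) / image_width_px
    target_angle = 2 * math.atan(target_width_m / (2 * range_m))
    return target_angle / ifov_rad

# A 0.5 m-wide target at 1000 m, seen by a 30-degree, 1920-px sensor:
px = pixels_on_target(0.5, 1000.0, 30.0, 1920)
```

At these assumed parameters the target spans roughly two pixels, which is why conventional detectors degrade at range and far-field-specific model refinement and data collection were required.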
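The persistent-ID behaviour in item 3 can be sketched with a deliberately simple association scheme. Assuming detections have already been projected into a common world frame via extrinsic calibration, a detection falling within a gate of an existing track keeps that track's ID regardless of which sensor produced it. A real tracker would use motion models and probabilistic association; this nearest-neighbour version (class name, gate value, and coordinates are all hypothetical) only illustrates the handover concept:

```python
import itertools
import math

class GlobalTracker:
    """Maintain persistent track IDs across sensor handovers.

    Detections arrive as world-frame (x, y) positions. If a detection
    lies within `gate` metres of an existing track, it inherits that
    track's ID; otherwise a new ID is issued.
    """
    def __init__(self, gate=2.0):
        self.gate = gate
        self.tracks = {}            # track ID -> last (x, y)
        self._ids = itertools.count(1)

    def update(self, x, y):
        best_id, best_d = None, self.gate
        for tid, (tx, ty) in self.tracks.items():
            d = math.hypot(x - tx, y - ty)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            best_id = next(self._ids)
        self.tracks[best_id] = (x, y)
        return best_id

tracker = GlobalTracker()
# Target seen by sensor A, then handed over to overlapping sensor B:
id_a = tracker.update(10.0, 5.0)   # first sighting (sensor A)
id_b = tracker.update(10.5, 5.1)   # same target, now from sensor B
```

Because association happens in the shared world frame rather than per sensor, the track ID survives the transition between overlapping fields of view, which is the continuity property the section describes.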


Operational Significance for TerraSense and Global Defence


  • Advanced Fusion & Modularity: The architecture is inherently modular and scalable, built to handle heterogeneous sensor configurations, modalities, and geometries. This flexibility allows for the rapid integration of different sensor types and arrays to meet diverse platform requirements.

  • Situational Awareness: By combining data from a fused sensor array and providing far-field detection, recognition and identification (DRI) and tracking capabilities, ATTCS significantly enhances the operator's understanding of the environment, ensuring continuous, high-fidelity situational awareness.

  • Future-Proofing: The success in solving distributed processing and synchronized, real-time sensor fusion challenges means the core ATTCS technology can be adapted to various military and security applications, securing TerraSense's role as a leader in AI-driven tactical systems.

  • Covert and High-Survivability Operation: The ATTCS system is built around passive sensors (such as the visual and thermal modalities). This design ensures the system does not need to emit any detectable signals (unlike active systems like radar or laser rangefinders) to perform continuous surveillance, detection, and tracking. This drastically reduces the platform's electromagnetic signature, making it a highly covert and survivable asset in a contested environment.


From Research to Capability


ATTCS demonstrates that automated ground sensing can move beyond demonstrations into operationally credible systems. The project delivered deterministic fusion across distributed nodes, AI performance at extended range, and tracking that survives sensor transitions. These capabilities were validated through joint trials with BAE Systems in Australia, where the system was exercised in varied terrain and lighting conditions representative of operational use.


For TerraSense, it establishes a platform for future land-based sensing. For users, it replaces disconnected feeds with a coherent picture built for the realities of the ground domain.


