The Deeplomatics project aims to detect, localize, and identify the low-level acoustic sources generated by UAV intrusions over sensitive areas. To this end, a multimodal approach has been chosen. Acoustic sensors coupled to an AI processor running a neural network developed specifically for simultaneous detection, localization, and identification tasks analyze the incoming acoustic waves 40 times per second. A data fusion software aggregates the information sent by all the sensors deployed over the surveyed area and computes the estimated position of the threat in a geographic reference frame. This position is then sent to an optical sensor mounted on a motorized turret carrying multiple-wavelength cameras, including wide and narrow field-of-view visible cameras, a thermal camera, and a short-wave infrared range-gated active imaging system. A second deep neural network then focuses on detecting and identifying drones in the image in order to confirm the intrusion of a UAV over the surveyed area. Constant communication between the acoustic sensors, the data fusion software, and the cameras allows for real-time detection, localization, and identification of UAVs 5 times per second. The system was deployed at the Baldersheim proving ground in May 2022. This presentation gives an overview of this field experiment, in which 5 acoustic sensors and one optical sensor were connected to the data fusion software. 14 scenarios representing more than 95 minutes of UAV flights were analyzed in real time, leading to a median radial position estimation error of 10.7 meters with a standard deviation of 15 meters. The data fusion process also led to a significant enhancement of the identification rate of the incoming drone, with 95% correct identification for drone models present in the training database.
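The abstract does not specify how the data fusion software combines the per-sensor estimates into a geographic position. As a purely illustrative sketch, direction-of-arrival estimates from several acoustic sensors can be fused by least-squares triangulation: find the point minimizing the squared perpendicular distances to all bearing lines. The `triangulate` function below and its bearing convention are assumptions for this 2D example, not the project's actual fusion algorithm:

```python
import numpy as np

def triangulate(sensors, bearings_deg):
    """Least-squares intersection of 2D bearing lines.

    sensors: (N, 2) sensor positions.
    bearings_deg: N bearings in degrees, counterclockwise from the +x axis.
    Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i, i.e. the
    normal equations for minimizing squared distance to each bearing line.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, theta in zip(np.asarray(sensors, dtype=float), bearings_deg):
        d = np.array([np.cos(np.radians(theta)), np.sin(np.radians(theta))])
        M = np.eye(2) - np.outer(d, d)  # projector orthogonal to the bearing
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Two sensors observing a target at (100, 100):
# one at the origin sees a 45-degree bearing, one at (200, 0) sees 135 degrees.
est = triangulate([[0, 0], [200, 0]], [45.0, 135.0])
# est is approximately [100., 100.]
```

With more sensors the system becomes overdetermined and the least-squares solution averages out individual bearing errors, which is one reason multi-sensor fusion can reduce the radial position error.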