Thermal & Visible Light AI Camera Module – FAQ and Guide
What is this module, and what are its key features?
This is a dual-spectrum imaging module specifically designed for drones, integrating visible light and infrared thermal imaging with dual-spectrum fusion in a single compact unit. Key features include:
- All-Weather Performance: Capable of detecting and tracking targets in daylight, low light, smoke, or fog.
- AI-Powered Target Tracking: Built-in AI algorithms automatically recognize and track pedestrians and vehicles at distances of up to 200 meters.
- Compact & Lightweight: The module is ultra-compact (20 × 20 × 36 mm) and ultra-light (<37 g), making it easy to mount on drones, gimbals, and FPV systems.
- Low Latency & Efficient: Ultra-low power consumption (<0.8 W) with real-time imaging latency <60 ms.
How does it integrate with Betaflight?
The module is fully compatible with open-source flight controllers such as Betaflight. Integration features:
- Video Output: Standard video output interfaces for FPV or telemetry integration.
- Data Interfaces: Supports UART/I2C for transmitting AI recognition data or target coordinates to the flight controller, allowing the flight system to use this data for autonomous or assisted navigation.
- Quick Deployment: Can be mounted and integrated within 5 minutes due to its modular design.
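To illustrate how a companion system might consume the target coordinates sent over UART, here is a minimal parsing sketch. The packet layout (header bytes, field order, checksum) is a hypothetical example for illustration only; the vendor's actual frame format will be defined in the user manual.

```python
import struct

# Hypothetical packet layout (NOT the vendor's actual format):
# 2-byte header, uint8 target class, int16 x, int16 y (pixel offsets
# from image center), uint8 checksum (sum of payload bytes & 0xFF).
HEADER = b"\xAA\x55"

def parse_target_packet(packet: bytes):
    """Parse one 8-byte target packet; return (cls, x, y) or None."""
    if len(packet) != 8 or packet[:2] != HEADER:
        return None
    cls, x, y = struct.unpack("<Bhh", packet[2:7])
    if (sum(packet[2:7]) & 0xFF) != packet[7]:
        return None  # corrupted frame, discard
    return cls, x, y

# Build a sample packet: class 1 ("person") at pixel offset (+40, -12).
payload = struct.pack("<Bhh", 1, 40, -12)
pkt = HEADER + payload + bytes([sum(payload) & 0xFF])
```

A receiver loop on the flight-controller side would read bytes from the UART, resynchronize on the header, and feed complete packets to a parser like this.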
Does the module itself control the drone, or does it only provide data?
The AI module itself does not directly control the drone’s flight. It provides intelligent imaging and target-tracking data, and the flight controller then makes decisions based on this information for navigation or mission execution.
The AI module is equipped with a UART interface that continuously transmits real-time positional data of detected targets, such as humans and vehicles, directly to the flight controller. By receiving these precise coordinates, the flight controller can dynamically adjust the drone’s attitude, orientation, and flight path, enabling autonomous target tracking and intelligent navigation. This seamless data exchange allows the drone to react to moving targets in real time, maintain stable flight while following or observing targets, and execute complex missions with minimal operator intervention. The integration of AI perception and flight control enhances both operational efficiency and mission safety, making it ideal for surveillance, inspection, and search-and-rescue applications.
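As a sketch of how a flight controller could turn those positional updates into motion, the function below maps a target's pixel offset from image center to normalized yaw and pitch rate commands using simple proportional control. The image resolution, gain, and clamping range are illustrative assumptions, not vendor parameters.

```python
def track_correction(x_off, y_off, img_w=640, img_h=480, kp=0.8):
    """Map a target's pixel offset from image center to normalized
    yaw/pitch rate commands in [-1, 1] (proportional control only).
    Resolution and gain are illustrative assumptions."""
    yaw = max(-1.0, min(1.0, kp * (2.0 * x_off / img_w)))
    pitch = max(-1.0, min(1.0, kp * (2.0 * y_off / img_h)))
    return yaw, pitch
```

A real tracking loop would typically add integral/derivative terms and rate limiting, but the core idea is the same: the error signal is the target's displacement from the image center.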
Does the AI recognize visible images or thermal signatures?
The module can recognize people and vehicles in both visible-light and thermal images:
- Visible Light Recognition: For daytime or well-lit environments.
- Thermal Imaging Recognition: For night or low-visibility conditions.
- Dual-Spectrum Fusion: Combines visible and thermal imaging in real-time for enhanced situational awareness and accuracy.
Can I see a combined image from the thermal and conventional cameras?
Yes. The dual-spectrum fusion mode combines the thermal and visible-light feeds into a single real-time image. The module also supports target recognition and tracking: once a target (person or vehicle) is detected, the UAV can be switched to image-guided mode via the remote controller. After activation, the module sends real-time target position data to the flight controller via UART, allowing the UAV to adjust its attitude and follow the target automatically.
How is the module physically connected to the UAV?
The module has two UART interfaces:
- One UART connects to the UAV’s remote receiver.
- The other UART connects to the flight controller.
The video output is provided via a CVBS interface, which can be transmitted through your video transmission system.
How is the module configured and set up?
There are two ways to configure the module:
- Using Betaflight software for UAV flight controller tuning.
- Using the manufacturer’s PC-based software for module configuration.
Detailed operation instructions and user manuals will be provided for both software tools.
Is the target acquisition automatic or manual?
The module performs automatic target detection. After enabling recognition and selecting the target via the remote controller, the UAV can enter image-guided mode and follow the target automatically.
How does the module handle UAV control, especially vertical movement?
Horizontal yaw control is straightforward. Vertical movement requires coordinated pitch and throttle control to maintain tracking accuracy and avoid losing the target. The module provides target position data to assist the flight controller in maintaining stable tracking.
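One way to realize that coordination is to split the vertical pixel error between pitch and throttle, so the drone climbs or descends without pitching so aggressively that the target leaves the frame. The sketch below assumes image y grows downward (positive offset = target below center) and uses illustrative gains; none of this is vendor-specified.

```python
def vertical_correction(y_off, img_h=480, kp_pitch=0.5, kp_throttle=0.3):
    """Split vertical pixel error between pitch and throttle commands,
    both normalized to [-1, 1]. Sign convention (assumed): positive
    y_off means the target is below image center, so the drone should
    descend (negative throttle correction)."""
    err = 2.0 * y_off / img_h  # normalized vertical error in [-1, 1]
    pitch = max(-1.0, min(1.0, kp_pitch * err))
    throttle = max(-1.0, min(1.0, kp_throttle * -err))
    return pitch, throttle
```

Keeping the pitch gain low relative to yaw reduces the risk of the camera tilting the target out of view during altitude changes.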
More FAQs
Q: If the system detects multiple objects, how does it decide which one to follow? What happens if the system is already tracking one object but then detects another similar object? And what happens if the tracked object is temporarily occluded by another? Can you describe the system logic?
A: The system is designed to allow the operator to choose the object of interest. This is done using the remote controller, where the user can move the on-screen cursor to the target that they want to track. The detection algorithm can recognize multiple objects at the same time, but the tracking engine can only actively follow a single target. If a new object appears while one is already being tracked, the system does not automatically switch unless the operator decides to change the target manually. In the case where the selected object is temporarily occluded (for example, one vehicle passing in front of another), the system may lose lock on the target depending on the duration and conditions of the occlusion. Once the occlusion clears, reacquisition may or may not happen automatically, so operator intervention may be required in challenging scenarios.
Q: What UART protocol is used for communication? Is it CRSF, MAVLink, or something else? What does the protocol contain? Does it transmit control channels, special packets, or other kinds of data? Also, how much latency does your module introduce between the receiver and the flight controller?
A: The module supports the CRSF protocol, which is widely used for reliable, low-latency communication between receivers and flight controllers. Inside the protocol, the data exchanged includes standard channel information, telemetry packets, and other control-related data. It follows the same structure that flight controllers are already designed to work with, so there is no need for additional parsing or translation on the user side. Since the tracking module sits inline between the receiver and the flight controller, an important consideration is latency. In practice, the added latency is minimal—our tests show it contributes only about 10 milliseconds, which is negligible for flight control purposes.
Q: Can I see the software or at least a description of it? Understanding the configuration options is very important to me.
A: Yes. We will provide a standard operation manual for the module. This document will describe the software interface, available configuration options, and how to make adjustments for your specific use case. It will serve as a reference so that users can fully understand how to set up and customize the system according to their needs.
Q: A more critical question: the module itself does not directly control objects—it only provides relative displacement information. Doesn’t Betaflight need to be modified in order to process this displacement data? Or is there another method?
A: No modifications to Betaflight are required. The system has been designed to work with Betaflight as it is. Only some simple configuration steps need to be carried out within the Betaflight system to integrate the displacement information properly. All of these steps will be detailed in the user manual, so operators can follow clear instructions without modifying the firmware or code.
Q: Regarding the thermal imaging module—do you provide low-latency outputs such as MIPI CSI-2 or USB 3.0, or is it only USB? I have seen many USB-based modules that use ASIC chips to capture CVBS video, and each step in their processing pipeline requires frame buffering, which adds noticeable latency. Is there a faster solution available?
A: Our design is based on a proprietary ASIC chip for image processing. Unlike many conventional USB modules, the image in our system does not need to go through external memory or multiple buffer stages, which significantly reduces latency. For the visible-light camera, the measured end-to-end delay from image capture to transmission output is within 50 milliseconds. The infrared (thermal) imaging path has slightly higher latency due to the additional processing involved. If the onboard recognition features are enabled, the system delay can increase by several tens of milliseconds, depending on the complexity of the task. However, under normal operating conditions, the total latency is kept under 100 milliseconds, which is suitable for UAV piloting and real-time monitoring applications.
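Putting those figures together, a rough latency budget for the visible-light path with recognition enabled might look like the following. The 40 ms recognition figure is an assumed midpoint of the "several tens of milliseconds" stated above, not a measured value.

```python
# Illustrative latency budget for the visible-light path (values from
# the answer above; the recognition figure is an assumption):
stages_ms = {
    "capture + ASIC processing + transmission output": 50,
    "onboard AI recognition (assumed midpoint)": 40,
}
total_ms = sum(stages_ms.values())  # stays under the stated 100 ms bound
```

This is consistent with the claim that the end-to-end path remains under 100 ms even with recognition active.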