Integration examples for MicaSense sensors

Standalone Camera Integration

A standalone camera integration is fast and easy to set up, which makes it great for getting started with data collection, but it may be too simple for more advanced users’ needs.

Required connections and parts:

  • Included DLS+GPS, or DLS2
  • Mounting solution to secure the camera to the aircraft
  • Mounting solution to secure DLS to the top of the aircraft
  • Appropriate power source for the camera 

Triggering happens according to one of two automatic capture modes, overlap or timer, which can be set up before flight by connecting to the camera’s WiFi and using the configuration page of the web UI (see the User Manual for your camera for details). Matching the camera settings to the aircraft’s flight plan parameters is important to ensure that the desired data is collected, but the camera and aircraft do not need to communicate directly.
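The same settings can also be applied programmatically over the HTTP API instead of the web UI. The sketch below assumes the camera's /config route and the auto_cap_mode and timer_period fields from the HTTP API, as well as the default Ethernet address; verify all of these against your camera's firmware and documentation before relying on them.

```python
# Sketch: switching the camera to timer-mode auto-capture over HTTP.
# The /config route, field names, and the default Ethernet address are
# assumptions based on the MicaSense HTTP API; verify against your firmware.
import json
import urllib.request

CAMERA = "http://192.168.1.83"  # assumed default Ethernet address

def timer_config(period_s: float) -> bytes:
    """Build the JSON body that enables timer mode with the given period."""
    return json.dumps({"auto_cap_mode": "timer", "timer_period": period_s}).encode()

def apply_config(body: bytes) -> dict:
    """POST a configuration body to the camera and return its response."""
    req = urllib.request.Request(
        CAMERA + "/config", data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

# Example usage (with a camera connected):
#   apply_config(timer_config(2.0))  # capture every 2 seconds
```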

Basic Image Readout with HTTP

In some cases, it may be desirable to save captured images to an external device, either for processing or storage, while operating the camera. The HTTP API has all of the features needed to support such an operation. 

Required connections and parts:

  • Included DLS+GPS, or DLS2
  • Mounting solution to secure the camera to the aircraft
  • Appropriate power source for the camera
  • WiFi or Ethernet-capable device to control the camera 

To see the images from a commanded capture, first take a capture using the capture route (/capture). The camera generates a random image ID and returns it in the HTTP response. This image ID can then be used to poll the status of the capture. Once the capture status is “complete”, the response will also contain paths for the captured images, which can be used to download them. The cached version of an image remains valid only until another capture is taken, so be sure to download these images before commanding additional captures. Also note that downloads over Ethernet are generally much faster than over WiFi.
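The capture, poll, and download steps above can be sketched as follows. The response field names ("id", "status", "raw_cache_path") and the default Ethernet address are taken from the HTTP API but may differ between firmware versions, so treat this as an illustration rather than a drop-in script.

```python
# Sketch of the capture -> poll -> download sequence over the HTTP API.
import json
import time
import urllib.request

CAMERA = "http://192.168.1.83"  # assumed default Ethernet address

def get_json(url):
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def raw_cache_urls(camera, status):
    """Full download URLs for each band's raw cached image."""
    return {band: camera + path for band, path in status["raw_cache_path"].items()}

def capture_and_download(dest="."):
    capture = get_json(CAMERA + "/capture")        # camera returns a random ID
    while True:                                    # poll until the capture finishes
        status = get_json(CAMERA + "/capture/" + capture["id"])
        if status["status"] == "complete":
            break
        time.sleep(0.2)
    # Download before commanding another capture: the cached images are
    # only valid until the next capture overwrites them.
    for band, url in raw_cache_urls(CAMERA, status).items():
        urllib.request.urlretrieve(url, f"{dest}/band_{band}.tif")
```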

Example Command Sequence (HTTP URLs over Ethernet, with example responses):

  • Download the raw cached version of band 1
  • Download the stored version of band 4

Increasing Metadata Accuracy by Synchronizing to an External PPS Signal

If GPS data is to be provided from an outside source, rather than the DLS, then the camera system should be connected to the PPS signal from the external GPS device, if possible. Using the same PPS signal ensures that the camera’s time is as synchronized with the rest of the system’s time as possible. In order to use an external PPS signal, the camera must be configured to receive the PPS signal as an input, through the web user interface, HTTP API, or serial API. Information on how to connect to and configure pins for external PPS use on different cameras is available for RedEdge and Altum. 
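One way to verify the PPS configuration is to read the camera's configuration back over HTTP. The sketch below assumes the /config route and the default Ethernet address; the exact name of the PPS pin setting varies by camera model, so compare the returned JSON against the RedEdge or Altum pin-configuration documentation rather than this example.

```python
# Reading the current camera configuration to inspect the PPS input setting.
# The /config route and default address are assumptions from the HTTP API.
import json
import urllib.request

def config_url(camera="http://192.168.1.83"):
    """URL of the camera's configuration route."""
    return camera + "/config"

def read_config(camera="http://192.168.1.83"):
    with urllib.request.urlopen(config_url(camera), timeout=5) as resp:
        return json.load(resp)

# Example usage (with a camera connected):
#   print(json.dumps(read_config(), indent=2))  # look for the PPS/pin settings
```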

Using a PixHawk or Other Autopilot with MAVLink

The serial MAVLink protocol can be used to communicate between the camera and autopilot devices such as the PixHawk. After the two devices are connected, MAVLink can be used to send GPS, attitude, and system time messages to the camera, among other things. The camera injects this data into image metadata. A more detailed guide to setting up this system with a RedEdge can be found in the RedEdge and PixHawk Guide, and a full description of the supported MAVLink features is documented in the serial API.
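As an illustration of the message traffic involved, the sketch below sends a single GPS_RAW_INT message with pymavlink (a third-party package). The serial port, baud rate, and sample coordinates are placeholders; see the serial API documentation for the message set the camera actually accepts, and send ATTITUDE and SYSTEM_TIME messages the same way.

```python
# Sketch: sending a MAVLink GPS message toward the camera with pymavlink.
# Port, baud rate, and coordinates are placeholder assumptions.
import time

def to_mavlink_degrees(deg: float) -> int:
    """MAVLink GPS messages carry lat/lon as integers in degrees * 1e7."""
    return int(round(deg * 1e7))

def send_gps(port="/dev/ttyUSB0", lat=46.8, lon=-114.0, alt_mm=978000):
    from pymavlink import mavutil  # third-party; imported lazily
    link = mavutil.mavlink_connection(port, baud=57600)
    link.mav.gps_raw_int_send(
        int(time.time() * 1e6),        # time_usec
        3,                             # fix_type: 3 = 3D fix
        to_mavlink_degrees(lat),       # latitude, degE7
        to_mavlink_degrees(lon),       # longitude, degE7
        alt_mm,                        # altitude above MSL, millimetres
        65535, 65535, 65535, 65535,    # eph/epv/vel/cog: unknown
        10)                            # satellites visible
```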

Using Top of Frame Output with an RTK Receiver with Post Processing

MicaSense systems can be paired with RTK GPS systems, such as the Emlid Reach, to collect data that is injected into image metadata after flight. Post-processing RTK data gives you a high degree of control and can produce better positional results than in-flight “real time” RTK. It is also fairly simple to set up, since the only camera configuration required is enabling the top of frame (ToF) output and connecting the ToF signal to the RTK GPS receiver.

When the camera begins to capture an image, a ToF signal is sent out. The RTK GPS can use this signal to record data at that exact moment and position. The RTK data can be aligned with each image after the flight completes, at which point the image file metadata can be adjusted to match the RTK recording, prior to photogrammetric processing.
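The post-flight alignment step can be sketched as nearest-timestamp matching. This is illustrative only: a real Emlid Reach workflow reads the receiver's event log and rewrites image metadata with a dedicated tool, and the 0.5-second matching tolerance here is an assumption.

```python
# Illustrative sketch: match each image capture time to the nearest RTK
# event recorded at top of frame. Tolerance is an assumed value.
def align(image_times, rtk_events, tolerance=0.5):
    """Match image timestamps to RTK events.

    image_times: list of capture timestamps (seconds)
    rtk_events:  list of (timestamp, lat, lon, alt) tuples from the receiver
    Returns {image_time: (lat, lon, alt)} for images matched within tolerance.
    """
    matched = {}
    for t in image_times:
        best = min(rtk_events, key=lambda e: abs(e[0] - t), default=None)
        if best is not None and abs(best[0] - t) <= tolerance:
            matched[t] = best[1:]
    return matched
```

Once matched, each image's metadata can be rewritten with its RTK position before photogrammetric processing.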

A more detailed explanation of this process and its setup can be found in our Emlid Reach Integration Example.

Direct RTK Injection with HTTP

If an autopilot has an RTK GPS, it can be useful to provide the exact location and time of each image to the camera, so that the image metadata is accurate without any need for post-processing. The HTTP API routes can be used to inject RTK GPS data into the metadata of images as they are being created. For the injected data to be as accurate as possible, the integration must configure the camera to output a top of frame pulse and use this pulse to determine the exact location and time of the image.

To begin, take a capture using the /capture route, making sure to set the use_post_capture_state property to true; without this field set, the RTK data sent will not be used. The RTK data can then be injected using the /capture_state route. Injected data must be posted to this route within one second of the top of frame. If this one-second window is missed, the data will not be recorded to the image metadata, and the HTTP response will indicate the error.
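The two-step sequence above can be sketched as follows. Passing use_post_capture_state as a query parameter, the default Ethernet address, and the JSON field names in the /capture_state body are all illustrative assumptions; check the HTTP API reference for the exact schema.

```python
# Sketch of direct RTK injection: command a capture with
# use_post_capture_state=true, then POST the RTK fix to /capture_state
# within one second of the top-of-frame pulse.
import json
import urllib.request

CAMERA = "http://192.168.1.83"  # assumed default Ethernet address

def capture_with_post_state():
    """Command a capture that waits for injected post-capture state."""
    url = CAMERA + "/capture?use_post_capture_state=true"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def rtk_state_body(lat, lon, alt) -> bytes:
    # Placeholder field names; substitute the documented /capture_state schema.
    return json.dumps({"latitude": lat, "longitude": lon, "altitude": alt}).encode()

def inject(body: bytes):
    """POST RTK data; must arrive within 1 second of the top of frame."""
    req = urllib.request.Request(
        CAMERA + "/capture_state", data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=1) as resp:
        return json.load(resp)
```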

Example Command Sequence (HTTP URLs over Ethernet, with example POST data and responses):

  • Command the capture
  • POST the RTK data (within 1 second of the top of frame)

Control the Camera from a USB-Enabled Device

If you wish to control the camera from a device that has USB ports but no serial or Ethernet ports, it may be desirable to connect the device’s USB directly to the camera’s USB, rather than placing a USB-to-Ethernet dongle in between. The RedEdge-M, RedEdge-MX, and RedEdge-MX Blue can be loaded with a software variant with the suffix “-USBE” that switches the camera’s USB port from “host mode” (required for connecting devices such as the WiFi dongle to the camera) to “device mode”, which makes the camera’s USB port appear as if the camera itself were a USB Ethernet dongle when connected directly to a USB host. To complete this integration, all that is needed is a cable with a USB-A plug on one side for the camera and a USB plug of the correct type for the autopilot or controlling device on the other side. No other dongles or converters are needed; however, you may need to install drivers, and you will likely need to set a static IP address on your controlling device to match the camera’s network settings. Once connected, you can use the same web interface and HTTP API that are normally provided on the camera’s Ethernet port and WiFi, but at the IP address assigned to the USB Ethernet interface.

See also

Inputs and outputs for MicaSense sensors 
