
Example synthetic training data image rendered using VFRAME (in Blender) used for developing a cluster munition object detector for Mnemonic.org. © 2021 Adam Harvey
VFRAME
VFRAME.io (Visual Forensics and Metadata Extraction) is a computer vision toolkit designed for human rights researchers. It aims to bridge the gap between state-of-the-art artificial intelligence used in the commercial sector and the needs of human rights researchers and investigative journalists working with large video or image datasets, making these technologies accessible and tailored to their work. VFRAME is under active development and was most recently presented at the Geneva International Centre for Humanitarian Demining (GICHD) Mine Action Technology Workshop in November 2021.
Visit VFRAME.io

VFRAME began in 2017 as an exploration with researchers at the Syrian Archive in Berlin to determine whether computer vision could be applied to their archive. Over the last four years, the VFRAME project has developed several techniques to carry this goal forward. The most recent development is a 3D-rendering system for creating high-fidelity training data. The results show promising accuracy of over 95% on a realistic benchmark dataset. Below is an example of detection of the AO-2.5RT cluster munition by a model trained using only synthetic (3D-rendered and 3D-printed) data sources.
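Benchmark accuracy for an object detector is typically computed by matching predicted bounding boxes to ground-truth boxes using intersection-over-union (IoU). The sketch below is a minimal illustration of that idea, not VFRAME's actual evaluation code; the box coordinates and the 0.5 IoU threshold are assumptions chosen for demonstration.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_at_iou(preds, truths, thresh=0.5):
    """Fraction of predictions that match an unused ground-truth box."""
    matched, used = 0, set()
    for p in preds:
        for i, t in enumerate(truths):
            if i not in used and iou(p, t) >= thresh:
                matched += 1
                used.add(i)
                break
    return matched / len(preds) if preds else 0.0

# Hypothetical detections vs. ground truth for a single frame:
preds = [(10, 10, 50, 50), (60, 60, 90, 90)]
truths = [(12, 8, 48, 52)]
print(precision_at_iou(preds, truths))  # 0.5: one of two predictions matches
```

Real benchmarks average this kind of matching across many images and IoU thresholds, but the core box-matching step is the same.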

Demo of VFRAME’s AO-2.5RT object detection algorithm trained using only synthetic data sources. © 2021 Mnemonic.org and Adam Harvey

Demo of VFRAME’s RBK-250 object detection algorithm trained using only synthetic data sources. © 2021 Mnemonic.org and Adam Harvey
For recent updates about VFRAME and to learn more about the project, visit VFRAME.io.