Full Motion Video (FMV) captured by Unmanned Aerial Systems (UAS), ground mobile platforms, and fixed persistent surveillance systems is emerging as a powerful weapon in the remote sensing arsenal. Harris is providing the Intelligence Community and its customers with the capability to harness that power as an intelligence resource for advanced processing, exploitation, and dissemination.
At the core of these capabilities is the Harris Full Motion Video Asset Management Engine (FAME), a COTS-based solution developed from decades of experience in the commercial broadcast industry. FAME is a video ingestion, management, and distribution architecture that provides the infrastructure for improving the way that video and other sources are ingested, cataloged, retrieved, and distributed. FAME integrates proven COTS products and practices from Harris' commercial broadcast business with the image processing, system integration, and security expertise we provide to our government customers.
Designed with input from government intelligence analysts, FAME is a collaborative platform that provides video, audio, and metadata encoding, video analytics, and archive capabilities within a unified full motion video solution.
It provides a platform where various metadata tracks are integrated and referenced against each other and against the content for intelligence fusion.
Simultaneous video feeds, received in multiple formats from multiple sensor types, can be ingested, annotated, discovered, exploited, and shared in real time. Discovery and dissemination of FMV products within bandwidth-challenged networks for situational awareness, enabled with products such as the Harris Falcon III® AN/PRC-117G manpack radio, are supported with a thin web-based client for at-distance access and collaboration.
Harris provides analysts the capability to select from among multiple live feeds or prerecorded streams and to perform exploitation and dissemination in real time within a collaborative environment. Analysts can now collaborate simultaneously to annotate the video with mission text chat, telestration, and audio. Annotations, along with data such as universal time, video time code, and geospatial position, are saved as rich metadata and associated with the video content for later search, retrieval, and publication to the Distributed Common Ground System (DCGS) Integration Backbone (DIB).
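As a rough illustration of how such an annotation might be captured as structured metadata, a record of this kind could look like the sketch below; the field names and layout are assumptions for illustration, not Harris' actual schema.

```python
# Hypothetical annotation record of the kind a FAME-style system might
# associate with ingested video; field names are illustrative, not Harris' schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class Annotation:
    asset_id: str    # identifier of the ingested video asset
    utc_time: str    # universal time of the annotated event (ISO 8601)
    timecode: str    # video time code within the asset (HH:MM:SS:FF)
    latitude: float  # geospatial position reported with the frame
    longitude: float
    author: str      # analyst who created the annotation
    kind: str        # "chat", "telestration", or "audio"
    body: str        # chat text, drawing reference, or audio clip URI


# Example record that could later be indexed for search, retrieval,
# and publication alongside the video content.
note = Annotation(
    asset_id="uas-feed-042",
    utc_time="2014-06-14T15:32:07Z",
    timecode="00:12:45:10",
    latitude=34.5,
    longitude=69.2,
    author="analyst01",
    kind="chat",
    body="Vehicle stops at compound entrance.",
)
print(json.dumps(asdict(note), indent=2))
```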
The Harris FAME-based solution solves many of the current issues that limit the exploitation of Full Motion Video. It provides for more robust archival, search, and retrieval capabilities. It associates extensive metadata with the video content for more efficient discovery and resolves the problem of collected video essentially falling on the floor unexploited. Perhaps most importantly, it enables collaboration among multiple distributed users to yield better and more accurate actionable intelligence.
Enhanced Full Motion Video Capabilities
- Feeds: Ingests analog or digital video feeds with embedded (KLV or ESD) metadata, including baseband, MPEG-2, and H.264 streams in Standard Definition (SD) and High Definition (HD). Wide Area Large Format (WALF) feed ingest is a planned enhancement.
- Multiple Feed Support: Enables selection of a feed of interest from a display of multiple real-time incoming feeds from UAS or other platforms.
- Metadata Extraction: Automatically extracts KLV metadata encoded in FMV streams or files in compliance with the Motion Imagery Standards Profile (MISP) and STANAG 4609 (a simplified parsing sketch follows this list).
- Video Exploitation: Enables the user to add annotated text, telestration, and data from other sources to the video.
- Interface to DIB (DCGS Integration Backbone): Extends motion imagery asset discovery, data fusion, and publishing.
- Transcoding/Transrating: Automatically adjusts video formats, resolutions, and bit rates to disseminate video to multiple users and platforms, including disadvantaged users.
- MISB Standards: Employs Motion Imagery Standards Board (MISB) standards for FMV and metadata.
- Scalable Architecture: Hardware and software elements are adaptable to the number of video feeds and clients to be serviced, as well as to the amount of storage required to support the database.
- Collaboration: Enables multiple users at disparate locations to collaborate through telestration, audio, and chat, all stored for future reference.
- Display Control: Enables the user to pause, rewind, play in slow motion, archive, clip, and disseminate ingested content.
- Data Fusion: Enables the fusion of related data, such as maps, previous motion imagery, graphics, and SIGINT, through overlays.
- Web Enabled: Accomplishes discovery, communications, and interaction through web services. The web client allows streaming of low-resolution content upon discovery using search or browse capabilities.
- Search Criteria: Enables the search and retrieval of motion imagery assets based on a wide variety of criteria, including geospatial, temporal, and audio.
- Products: Provides an exploited FMV product with metadata, including Internet chat multiplexed into the MPEG-2 transport stream. Exports NITF imagery files captured from video frames along with embedded metadata.
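To make the KLV extraction item above concrete, the following is a minimal sketch of how a MISB ST 0601-style local set can be walked once the raw KLV packet has been demultiplexed out of the STANAG 4609 transport stream. It is illustrative only and is not drawn from FAME itself: it assumes single-byte tags, skips checksum validation, and decodes no tag dictionary.

```python
# Illustrative KLV (Key-Length-Value) walker for a MISB ST 0601-style local set.
# Sketch only: assumes the packet is already demultiplexed from the STANAG 4609
# MPEG-2 transport stream, assumes single-byte tags, and skips checksum checks.

UAS_LOCAL_SET_KEY = bytes.fromhex("060E2B34020B01010E01030101000000")  # 16-byte universal key


def read_ber_length(buf: bytes, pos: int) -> tuple[int, int]:
    """Decode a BER length field; return (length, position after the field)."""
    first = buf[pos]
    pos += 1
    if first < 0x80:                       # short form: length fits in one byte
        return first, pos
    n = first & 0x7F                       # long form: next n bytes hold the length
    return int.from_bytes(buf[pos:pos + n], "big"), pos + n


def parse_local_set(packet: bytes) -> dict[int, bytes]:
    """Return {tag: raw value bytes} for each item in one KLV packet."""
    if not packet.startswith(UAS_LOCAL_SET_KEY):
        raise ValueError("not a UAS Datalink Local Set packet")
    length, pos = read_ber_length(packet, len(UAS_LOCAL_SET_KEY))
    end = pos + length
    items = {}
    while pos < end:
        tag = packet[pos]                  # single-byte tag (sufficient for common items)
        pos += 1
        vlen, pos = read_ber_length(packet, pos)
        items[tag] = packet[pos:pos + vlen]
        pos += vlen
    return items


# Hand-built example packet carrying tag 3 (mission ID) for demonstration.
value = b"MISSION01"
packet = UAS_LOCAL_SET_KEY + bytes([2 + len(value), 3, len(value)]) + value
print(parse_local_set(packet))             # {3: b'MISSION01'}
```

In practice, the extracted values (platform position, sensor pointing angles, frame center coordinates, and so on) are what allow the video to be indexed against the geospatial and temporal search criteria listed above.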
Note: All images used in this article are courtesy of Harris Corporation.