
What We Do

The Laboratory for Analytic Sciences’ research themes focus on using data to improve intelligence analysis. Some of the questions arise from properties of the data, which may be large, streaming, or heterogeneous, while others arise from application-specific issues. We are always looking for partners to explore cutting-edge research projects in these areas.

Content Triage

How can technology help intelligence analysts determine what’s important in order to effectively perform their analysis?

When we are faced with large data sets – like video, images, audio and text – how do we prioritize what’s most important? The process of quickly and easily finding valuable information within large data sets is key for intelligence analysts. Content triage projects aim to improve analysts’ ability to efficiently and effectively search, explore, prioritize, retain and extract value from an ever-increasing amount of data in various formats. We research and apply methods that enable the analysis of content in order to find the most relevant information. 

Project Examples

Content triage projects create novel interfaces for exploration and discovery in large volumes of text.

Text-to-video search project

LAS is collaborating with Gedas Bertasius from UNC-Chapel Hill to create a parameter-efficient transfer learning system for large-scale text-to-video retrieval, allowing analysts to search for a broader scope of objects or activities in large amounts of video data without the need for additional annotated data or extensive model retraining.
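
As a simplified illustration of the retrieval setting (not the project’s parameter-efficient architecture), the sketch below ranks video clips against a free-text query by cosine similarity in a shared embedding space. The pseudo-embedding function is a hypothetical stand-in for pretrained text and video encoders.

```python
import hashlib
import numpy as np

def _pseudo_embed(key: str, dim: int = 256) -> np.ndarray:
    """Stand-in embedding: a deterministic random unit vector keyed by the input.
    In a real system this would be a pretrained text or video encoder."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def rank_clips(query: str, clip_ids: list[str], top_k: int = 5) -> list[tuple[str, float]]:
    """Rank video clips by cosine similarity between query and clip embeddings."""
    q = _pseudo_embed("text:" + query)
    scored = [(cid, float(q @ _pseudo_embed("video:" + cid))) for cid in clip_ids]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

if __name__ == "__main__":
    hits = rank_clips("a truck crossing a bridge", [f"clip_{i:04d}.mp4" for i in range(1000)])
    for clip_id, score in hits:
        print(f"{score:.3f}  {clip_id}")
```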

Speech emotion recognition project

This project with academic collaborator Carlos Busso at the University of Texas at Dallas explores preference learning for emotional speech and develops machine learning algorithms that identify emotional similarity between audio recordings. These algorithms will provide valuable tools for information retrieval with voice data.
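
The sketch below illustrates one common way to learn such a similarity space from preference judgments: a triplet loss pulls an anchor recording toward the clip judged more emotionally similar and pushes it away from the one judged less similar. It is a minimal, hypothetical PyTorch setup with an arbitrary feature size, not the project’s actual model or features.

```python
import torch
import torch.nn as nn

class EmotionEmbedder(nn.Module):
    """Maps fixed-size acoustic feature vectors to an embedding space where
    distance reflects emotional similarity (illustrative architecture)."""
    def __init__(self, n_features: int = 88, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

# Each preference triple says: "positive" sounds emotionally closer to the
# anchor recording than "negative" does.
model = EmotionEmbedder()
loss_fn = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

anchor, positive, negative = (torch.randn(32, 88) for _ in range(3))  # dummy features
optimizer.zero_grad()
loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()
optimizer.step()
```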

Human-Machine Teaming

How can technology help intelligence analysts lighten their cognitive load? 

Technology may reduce the burden associated with analytic tasks, like content triage, but for it to be effective we need an understanding of how people actually incorporate new technologies into their workflows. Human-machine teaming helps analysts gather data and produce intelligence on adversaries’ actions, intentions, and capabilities so decision-makers can make plans and execute policies and operations. Projects in this space build models of what analysts do, identify tasks that could be improved, develop interventions, and identify current analytic methods that could be used in new ways to enhance analyst tradecraft.

Project Examples

Prototype application/method to support analyst cognition

A project by the Hunt Laboratory at the University of Melbourne aims to improve how analysts consume and make use of data and information in their workflows by leveraging Narrative Abduction, a model of analytic cognition.

Construction of workflow graphs and recommenders

Insider Threat is an evidence-finding game developed by University of Kentucky’s AI/ML researchers Brent Harrison and Stephen Ware. In the game, workflow graphs are created from users’ past behavior and activity logs. Recommenders then make predictions about likely next activities to offer as suggestions to analysts doing a similar task. 
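
A minimal sketch of this idea, assuming simple session logs of named activities: consecutive actions are counted as edges in a workflow graph, and the recommender suggests the most frequent next steps. The game’s actual graphs and models are more sophisticated.

```python
from collections import Counter, defaultdict

def build_workflow_graph(sessions: list[list[str]]) -> dict[str, Counter]:
    """Count observed transitions between consecutive activities in each
    logged session, yielding a weighted directed graph."""
    graph: dict[str, Counter] = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            graph[current][nxt] += 1
    return graph

def recommend_next(graph: dict[str, Counter], current: str, k: int = 3) -> list[tuple[str, float]]:
    """Suggest the k most likely next activities, with empirical probabilities."""
    counts = graph.get(current)
    if not counts:
        return []
    total = sum(counts.values())
    return [(act, n / total) for act, n in counts.most_common(k)]

logs = [
    ["open_case", "search_email", "flag_message", "write_report"],
    ["open_case", "search_email", "search_files", "flag_message"],
    ["open_case", "search_files", "flag_message", "write_report"],
]
graph = build_workflow_graph(logs)
print(recommend_next(graph, "flag_message"))  # e.g., [('write_report', 1.0)]
```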

Operationalizing Artificial Intelligence and Machine Learning (AI/ML)

How can we decrease the costs of machine learning? Where can AI/ML give analysts a radical strategic advantage? 

Our research on machine learning (ML) and artificial intelligence (AI) focuses on how machine learning techniques can remain useful even when working under a variety of constraints in an operational environment. This research theme supports LAS’s strategic goal of advancing the science of ML and operationalizing it for intelligence analysts. How can practical applications of ML be used to address challenges that mission analysts face now, or will face in the future?

Project Examples

Projects in the operationalizing AI/ML theme help reduce reliance on domain experts for data labeling.

Weakly and semi-supervised learning

The Cyber Snorkel project from industry partner PUNCH Cyber Analytics Group produced a software package that adapted the Snorkel weakly supervised labeling framework to provide labels for netflow and similar categories of data, making it easier to train machine learning models for cybersecurity applications.
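
The sketch below shows the general Snorkel pattern the project builds on: analysts write small labeling functions that vote on records, and a label model combines the votes into training labels. The netflow fields and thresholds here are hypothetical, not those used by Cyber Snorkel.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, BENIGN, SUSPICIOUS = -1, 0, 1

# Hypothetical netflow fields and thresholds; real schemas will differ.
@labeling_function()
def lf_rare_port(x):
    return SUSPICIOUS if x.dst_port not in (80, 443, 53) else ABSTAIN

@labeling_function()
def lf_small_flow(x):
    return BENIGN if x.bytes_out < 1_000 else ABSTAIN

@labeling_function()
def lf_beaconing(x):
    # Many near-identical flows to one destination can indicate beaconing.
    return SUSPICIOUS if x.flows_per_hour > 50 else ABSTAIN

df = pd.DataFrame({
    "dst_port": [443, 8081, 53, 4444],
    "bytes_out": [512, 90_000, 300, 12_000],
    "flows_per_hour": [2, 75, 1, 120],
})

L = PandasLFApplier(lfs=[lf_rare_port, lf_small_flow, lf_beaconing]).apply(df=df)
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L, n_epochs=200, seed=0)
df["weak_label"] = label_model.predict(L=L)  # training labels without manual annotation
```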

Few-shot/zero-shot learning

A training workshop led by national lab partner Pacific Northwest National Laboratory gave federal government staff the opportunity to learn about and apply modern few-shot and zero-shot machine learning techniques.
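
To give a flavor of the zero-shot setting, the sketch below classifies text with no task-specific training data: class descriptions stand in for labeled examples, and documents are assigned to the nearest description in embedding space. It assumes the open-source sentence-transformers package and is an illustration, not material from the workshop.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Any general-purpose sentence encoder works; this model name is one common choice.
model = SentenceTransformer("all-MiniLM-L6-v2")

labels = {
    "logistics": "messages about shipping, transport, and supply routes",
    "finance": "messages about payments, accounts, and money transfers",
}
documents = [
    "Wire the funds to the usual account by Friday.",
    "The trucks leave the depot at dawn and reach the border by noon.",
]

label_vecs = model.encode(list(labels.values()), normalize_embeddings=True)
doc_vecs = model.encode(documents, normalize_embeddings=True)
scores = doc_vecs @ label_vecs.T  # cosine similarity via normalized dot products
for doc, row in zip(documents, scores):
    best = list(labels)[int(np.argmax(row))]
    print(f"{best:>9}: {doc}")
```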

Synthetic data generation

Expanding upon a senior design project, Fayetteville State University and LAS are collaborating to explore multiple ways of generating photo-realistic images that can quickly cross-train computer vision models to detect rare and unique objects.
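
One simple member of this family of techniques is cut-and-paste compositing, sketched below with the Pillow imaging library: a cut-out of the rare object is pasted onto varied backgrounds at random scales and positions, and the paste location doubles as a bounding-box label. The project itself may use more sophisticated rendering; this is only an illustration.

```python
import random
from pathlib import Path
from PIL import Image

def composite(object_img: Path, background_img: Path, out_path: Path) -> tuple:
    """Paste a cut-out object (RGBA with transparent background) onto a background
    image at a random scale and position; return the bounding box for training."""
    obj = Image.open(object_img).convert("RGBA")
    bg = Image.open(background_img).convert("RGB")

    scale = random.uniform(0.2, 0.6)
    w, h = int(obj.width * scale), int(obj.height * scale)
    obj = obj.resize((w, h))

    x = random.randint(0, max(bg.width - w, 0))
    y = random.randint(0, max(bg.height - h, 0))
    bg.paste(obj, (x, y), obj)  # the alpha channel acts as the paste mask
    bg.save(out_path)
    return (x, y, x + w, y + h)  # box usable as a detection label
```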