
What We Do

We are always looking for partners to explore cutting-edge research projects in these areas.

The Laboratory for Analytic Sciences’ research themes focus on using data to improve intelligence analysis. Some of the questions arise from properties of the data—they might be large, streaming or heterogeneous—while others arise from application-specific issues.

Content Triage

How can technology help intelligence analysts determine what’s important in order to effectively perform their analysis?

When we are faced with large data sets – like video, images, audio and text – how do we prioritize what’s most important? The process of quickly and easily finding valuable information within large data sets is key for intelligence analysts. Content triage projects aim to improve analysts’ ability to efficiently and effectively search, explore, prioritize, retain and extract value from an ever-increasing amount of data in various formats. We research and apply methods that enable the analysis of content in order to find the most relevant information. 
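As a rough illustration of the prioritization idea behind content triage (not any specific LAS project; the sample documents and scoring scheme are invented), documents can be ranked against an analyst's query with a simple TF-IDF relevance score:

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank document indices by summed TF-IDF of query terms, most relevant first."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    # Document frequency: how many documents contain each term.
    df = Counter(t for toks in tokenized for t in set(toks))

    def score(toks):
        tf = Counter(toks)
        return sum(
            tf[t] * math.log(n / df[t])
            for t in query.lower().split() if t in tf
        )

    return sorted(range(n), key=lambda i: score(tokenized[i]), reverse=True)

# Invented example documents.
docs = [
    "weather report for the region",
    "intercepted message about shipment routes",
    "shipment schedule and routes for next week",
]
print(tfidf_rank("shipment routes", docs))  # doc indices, most relevant first
```

Real triage systems operate over far richer representations (embeddings, multimedia features), but the core move is the same: score everything against what the analyst needs, then surface the top of the list.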

Project Examples

Content triage projects create novel interfaces for exploration and discovery in large volumes of text.

PATTIE transcription project

This project from industry partner Cymantix evaluates the effectiveness of its document clustering interface on poorly transcribed speech audio data.

GRAFS prototype

The GRAFS prototype and paper from a UNC-Chapel Hill academic partner demonstrates a novel combination of faceted search and clustering of related topics.

Human-Machine Teaming

How can technology help intelligence analysts lighten their cognitive load? 

Technology may reduce the burden associated with analytic tasks, like content triage, but for it to be effective, we need an understanding of how people actually incorporate new technologies into their workflows. Human-machine teaming helps analysts gather data and produce intelligence on adversaries’ actions, intentions, and capabilities so decision-makers can make plans and execute policies and operations. Projects in this space build models of what analysts do, identify tasks that could be improved, develop interventions, and identify current analytic methods that could be used in new ways to enhance analyst tradecraft.

Project Examples

Levels of automation and performance outcomes

Johnston Analytics is developing a framework for understanding how a system’s level of automation may impact performance outcomes, given defined inputs and measurements of disruption to workflows.

Prototype application/method to support analyst cognition

A project by the Hunt Laboratory at University of Melbourne aims to improve how analysts consume and make use of data and information in their workflows by leveraging Narrative Abduction, a model of analytic cognition.

Construction of workflow graphs and recommenders

In an evidence-finding game called Insider Threat, developed by AI/ML researchers from the University of Kentucky, workflow graphs are created from users’ past behavior and activity logs. Recommenders then predict likely next activities and offer them as suggestions to analysts performing similar tasks.
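To sketch the core idea (this is not the Kentucky team's actual system; the activity names are invented), a next-activity recommender can be derived from transition counts between consecutive activities in session logs:

```python
from collections import defaultdict, Counter

def build_workflow_graph(sessions):
    """Count transitions between consecutive activities across sessions."""
    graph = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            graph[current][nxt] += 1
    return graph

def recommend_next(graph, activity, k=2):
    """Suggest the k most frequent follow-on activities."""
    return [a for a, _ in graph[activity].most_common(k)]

# Hypothetical activity logs from three analyst sessions.
sessions = [
    ["search", "open_doc", "annotate", "report"],
    ["search", "open_doc", "translate", "annotate"],
    ["search", "filter", "open_doc", "annotate"],
]
graph = build_workflow_graph(sessions)
print(recommend_next(graph, "open_doc"))  # ['annotate', 'translate']
```

A production system would condition on more context than the single previous activity, but even first-order transition counts capture much of the regularity in routine workflows.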

Operationalizing Artificial Intelligence and Machine Learning (AI/ML)

How can we decrease the costs of machine learning? Where can AI/ML give analysts a radical strategic advantage? 

Our research on machine learning (ML) and artificial intelligence (AI) focuses on how ML techniques can remain useful under the variety of constraints found in an operational environment. This research theme supports LAS’s strategic goal of advancing the science of ML and operationalizing it for intelligence analysts. How can practical applications of ML address the challenges mission analysts face now, or will face in the future?

Project Examples

Operationalizing AI/ML projects help reduce reliance on domain experts for data labeling.

Weakly and semi-supervised learning

The Cyber Snorkel project from industry partner PUNCH Cyber Analytics Group produced a software package that adapts the Snorkel weakly supervised labeling framework to label netflow and similar categories of data, making it easier to train machine learning models for cybersecurity applications.
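The labeling-function pattern that Snorkel popularized can be sketched without the library itself. In this toy version (not PUNCH's actual package; the heuristics, field names, and thresholds are invented), several noisy rules each vote on a flow record or abstain, and a majority vote produces the weak label:

```python
# Label values: ABSTAIN means a rule has no opinion on this record.
MALICIOUS, BENIGN, ABSTAIN = 1, 0, -1

def lf_high_port(flow):
    # Invented heuristic: ephemeral-range destination ports look suspicious.
    return MALICIOUS if flow["dst_port"] > 49151 else ABSTAIN

def lf_small_payload(flow):
    # Invented heuristic: tiny flows resemble scanning traffic.
    return MALICIOUS if flow["bytes"] < 100 else ABSTAIN

def lf_known_service(flow):
    # Invented heuristic: common service ports are usually benign.
    return BENIGN if flow["dst_port"] in (80, 443, 53) else ABSTAIN

def weak_label(flow, lfs):
    """Majority vote over non-abstaining labeling functions (ties arbitrary)."""
    votes = [v for v in (lf(flow) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

lfs = [lf_high_port, lf_small_payload, lf_known_service]
flow = {"dst_port": 55123, "bytes": 60}
print(weak_label(flow, lfs))  # 1 (MALICIOUS)
```

Snorkel itself replaces the majority vote with a learned label model that estimates each function's accuracy and correlations, but the workflow — write cheap heuristics instead of hand-labeling — is the same.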

Few-shot/zero-shot learning

A training workshop led by national lab partner Pacific Northwest National Laboratory gave federal government staff the opportunity to learn about and apply modern few-shot and zero-shot machine learning techniques.
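A minimal sketch of the few-shot idea (not material from the PNNL workshop; the example texts and labels are invented): classify a query by its similarity to a handful of labeled "support" examples per class. A real system would use a pretrained encoder for the embeddings; a bag-of-words vector stands in here so the sketch is self-contained:

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a pretrained sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def few_shot_classify(query, support):
    """Assign the label whose support examples are most similar on average."""
    q = embed(query)
    scores = {
        label: sum(cosine(q, embed(x)) for x in examples) / len(examples)
        for label, examples in support.items()
    }
    return max(scores, key=scores.get)

# One labeled example per class: the "shots".
support = {
    "network": ["router dropped packets on the link"],
    "malware": ["trojan binary exfiltrated files"],
}
print(few_shot_classify("the router link is dropping packets", support))
```

Zero-shot classification follows the same pattern with no examples at all: the class *names or descriptions* are embedded and the query is matched against those instead.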

Synthetic data generation

A paper published at NIPS by an NC State academic partner established, for a particular use case, the benefits of augmenting machine learning training data with synthetically generated counterfactual statements (synthetic text expressing the opposite polarity or valence of the original source text).
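To make the augmentation idea concrete (this is not the NC State method, which the page does not detail; the word list and examples are invented), a toy generator can flip polarity-bearing words and pair each original example with its opposite-valence counterfactual under the flipped label:

```python
# Invented polarity-flip rules; a real system would use a learned generator.
ANTONYMS = {
    "good": "bad", "bad": "good",
    "success": "failure", "failure": "success",
    "increased": "decreased", "decreased": "increased",
}

def counterfactual(sentence):
    """Flip polarity-bearing words to synthesize an opposite-valence example."""
    return " ".join(ANTONYMS.get(w, w) for w in sentence.split())

def augment(dataset):
    """Pair each (text, label) with its counterfactual under the flipped label."""
    out = []
    for text, label in dataset:
        out.append((text, label))
        out.append((counterfactual(text), 1 - label))
    return out

data = [("the operation was a success", 1)]
print(augment(data))
# [('the operation was a success', 1), ('the operation was a failure', 0)]
```

The appeal of counterfactual augmentation is that each synthetic example differs minimally from a real one, pushing the model to attend to the words that actually carry the label.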