
What We Do

The Laboratory for Analytic Sciences’ research themes focus on using data to improve intelligence analysis. Some of the questions arise from properties of the data, which might be large, streaming, or heterogeneous, while others arise from application-specific issues.

We are always looking for partners to explore cutting-edge research projects in these areas.

Sensemaking

How can technology help intelligence analysts determine what’s important in order to effectively perform their analysis?

For content triage, LAS works on the scalable extraction and summarization of text, image, speech, audio, and video content to facilitate its discovery and use in intelligence analysis workflows. Application areas include:

  • Video Sensemaking helps analysts understand vast amounts of video data by making video content searchable, summarizing it, and alerting users to videos of interest, all while incorporating user feedback to improve the systems.
  • Audio Sensemaking involves the methods and technologies employed by language analysts to extract valuable intelligence from intricate, real-world foreign language audio. This process addresses significant challenges such as unwanted background noise, a multitude of speaking styles, subtle cultural nuances, and the sheer volume of data, all while incorporating user feedback to improve the systems.
In this video, LAS staff outline the need for researchers who can make videos searchable, improve the design of user interfaces, and summarize hours of video data.
In this video, LAS staff explain the value of extracting information and understanding from audio source data done by language analysts.

Sensemaking Project Examples

Operationalizing AI/ML

How can we decrease the costs of machine learning? Where can AI/ML give analysts a radical strategic advantage? 

Our research focuses on understanding and evaluating the capabilities and behavior of artificial intelligence/machine learning (AI/ML) technologies, to facilitate their appropriate integration into operational intelligence environments. Application areas include:

  • AI Benchmarking involves testing and comparing AI solutions using standardized methods to evaluate their performance, capabilities, and reliability, ultimately informing decisions about which AI models to use in various intelligence analysis applications.
In this video, LAS staff explain the lab’s interest in efforts to develop novel benchmarks that focus on mission-aligned use cases; in ways to better understand the safety, security, and total cost of an AI system; and in projects around the test and evaluation of agentic and other AI-integrated systems that incorporate additional tools to provide capabilities beyond the AI model itself.
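A minimal sketch of the kind of standardized comparison described above: each candidate system is run against the same labeled examples and scored with a shared metric so the results are directly comparable. The evaluation set, the “models,” and the metric here are hypothetical stand-ins for illustration, not systems or benchmarks LAS uses.

```python
# Minimal benchmarking sketch: run each candidate model against the same
# labeled examples and report a shared metric so results are comparable.
# The "models" below are stand-ins (simple keyword rules), not real systems.

from typing import Callable

# Hypothetical evaluation set: (document text, expected topic label).
EVAL_SET: list[tuple[str, str]] = [
    ("flight schedule attached for the supply convoy", "logistics"),
    ("transfer of funds routed through a shell company", "finance"),
    ("new encryption keys issued to field stations", "communications"),
    ("invoice and payment ledger for the front office", "finance"),
]

def keyword_model(text: str) -> str:
    """Stand-in model A: naive keyword rules."""
    if any(word in text for word in ("funds", "invoice", "payment")):
        return "finance"
    if any(word in text for word in ("encryption", "keys")):
        return "communications"
    return "logistics"

def majority_model(text: str) -> str:
    """Stand-in model B: always predicts the most common label."""
    return "finance"

def benchmark(models: dict[str, Callable[[str], str]]) -> None:
    """Score every model on the same evaluation set and print a ranking."""
    scores = {
        name: sum(predict(text) == label for text, label in EVAL_SET) / len(EVAL_SET)
        for name, predict in models.items()
    }
    for name, accuracy in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: accuracy = {accuracy:.2f}")

if __name__ == "__main__":
    benchmark({"keyword_rules": keyword_model, "majority_class": majority_model})
```

Real benchmarks add dimensions beyond a single accuracy number, such as cost, latency, and robustness, but the structure is the same: a fixed evaluation set and a common scoring procedure applied to every candidate.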

Recent Project Examples

Human-Centered AI

How can technology help intelligence analysts lighten their cognitive load? 

Human-centered AI projects focus on improving how analysts can partner effectively with automation, particularly by exploring novel user-experience designs that integrate state-of-the-art AI and ML capabilities. Application areas include:

  • AI-Enabled Workflows research examines efforts to build trust and collaboration between humans and AI by addressing the common pitfalls of over- or under-reliance on automated tools through enhancing credibility, strengthening approaches to evaluation, and transparently communicating performance and uncertainty.
  • Edge AI/ML processes data directly on devices or near the source to overcome limitations of cloud-based AI, especially in places with limited connectivity. Edge AI/ML improves real-time triage and analysis, and enables users to customize data filtering for specific insights (a minimal sketch of on-device triage appears below).
  • Agentic AI research involves developing AI systems that can make decisions, perform tasks, and integrate with diverse information systems with minimal human input, aiming to enhance intelligence analysis efficiency while ensuring strict adherence to accuracy, objectivity, and compliance standards.
In this video, LAS staff share examples of prior work and discuss the main components of AI-enabled workflows: enhancing sourcing, faithfulness, and attribution; evaluating explainable AI and language analysis; and communicating AI performance, uncertainty, and limitations.
In this video, a fictitious scene demonstrates how an edge device with its onboard AI can turn a chaotic and overwhelming moment into a manageable and efficient emergency response, ensuring that the most critical patients receive the life-saving attention they need first. LAS staff explain how this technology can also be used to triage intelligence information.
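The edge-triage idea above can be sketched in a few lines: score each incoming record on the device with a cheap local rule and forward only the items that cross a user-set threshold, rather than shipping everything to a remote service. The record fields, keyword scoring, and threshold here are hypothetical placeholders, not an LAS system.

```python
# Minimal edge-triage sketch: score records on the device and forward only
# those above a user-configurable threshold, instead of sending everything
# upstream. The scoring rule here is a hypothetical placeholder.

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Record:
    source: str
    text: str

def local_priority(record: Record, keywords: set[str]) -> float:
    """Cheap on-device score: fraction of priority keywords present."""
    words = set(record.text.lower().split())
    return len(words & keywords) / max(len(keywords), 1)

def triage(stream: Iterable[Record], keywords: set[str], threshold: float) -> Iterator[Record]:
    """Yield only records worth forwarding; everything else stays on the device."""
    for record in stream:
        if local_priority(record, keywords) >= threshold:
            yield record

if __name__ == "__main__":
    incoming = [
        Record("sensor-12", "routine status check, all clear"),
        Record("sensor-07", "casualty reported, medical evacuation requested"),
    ]
    priority_terms = {"casualty", "evacuation", "medical"}
    for hit in triage(incoming, priority_terms, threshold=0.5):
        print(f"forward to analyst: {hit.source}: {hit.text}")
```

Because both the keyword set and the threshold are plain parameters, a user could adjust the filtering to suit a specific mission without retraining or redeploying anything, which is the customization the Edge AI/ML item describes.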
In this video, LAS staff outline the need for researchers who can contribute to designing and building trustworthy, autonomous, multimodal, and explainable AI agents for use in mission-critical environments.

Human-Centered Project Examples

Human-centered AI projects demonstrate ways to address mission challenges related to enhancing the effectiveness of human analysts working with automated technology.

The Analyst Experience

The Analyst Experience (TAE) website is a handbook for those looking to collaborate with the intelligence community. Peek into the world of language analysts, intelligence analysts, and cyber/computer network analysts.

A Day in the Life of a Fictitious Analyst: “Ferris,” a fictitious expert language analyst, is one of the personas developed by LAS and outlined on the TAE website: “Ferris starts [his day] by reviewing and correcting a translated transcript, adding words and contextual notes to help the translator improve their work.” (Source: Adobe Stock)