
2022 Research Symposium


Each year, LAS undertakes a research program involving partners from a variety of academic, industry, and government communities. The outcomes of these research projects are of interest to our intelligence community stakeholders, as well as the respective communities of our academic and industry partners.

Research Projects

Due to the hybrid nature of our 2022 research symposium, this page provides visitors with an opportunity to learn about the research that LAS performed this year. Within the sections below, you will find videos and blog posts for each of our unclassified research projects.

We invite you to explore this year’s research, grouped by the following themes:

Building a Diverse, Expert Intelligence Community Workforce

Eric Ragan, Donald Honeycutt, Jeremy Block, Brett Benda

The project studies methods for automated summarization of analytic workflows to aid reporting, collaborative analysis hand-offs, and meta-analysis of analysis processes. Using intelligence analysis as the base context, the team developed a proof-of-concept software implementation that generates a summary of the analysis period as a sequence of smaller, more human-understandable visual snapshots of analysis steps over time. The research aims to reduce analysts’ overhead of manual annotation and report preparation by operating on software logs of user actions recorded as they conduct analysis. The approach considers the provenance of an analyst’s interactions over time along with specific documents and data items associated with each action. To break the summaries into shorter periods of time, temporal segmentation is performed by detecting notable changes in terms and topics. After processing the interaction log data, the method generates summaries in both list and natural language formats that are displayed through an interactive visual interface. The visual summaries can be customized with varying levels of detail and temporal resolution for summarized steps.
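As a loose illustration of the temporal segmentation step, the sketch below splits a logged sequence of analyst actions into episodes wherever the vocabulary of adjacent action windows diverges. The log format, window size, and threshold are invented for illustration and are not the project’s actual implementation.

```python
# Minimal sketch: segment an interaction log at topic shifts.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical interaction log: one text blob per user action (query, doc title, note).
actions = [
    "search cargo manifest shipping",
    "open document port authority shipping schedule",
    "note suspicious cargo container",
    "search bank transfer wire",
    "open document offshore account records",
]

WINDOW, THRESHOLD = 2, 0.1  # assumed values, for illustration only

tfidf = TfidfVectorizer().fit_transform(actions)

boundaries = [0]
for i in range(WINDOW, len(actions)):
    # Compare the current action against the average of the preceding window.
    prev = np.asarray(tfidf[max(0, i - WINDOW):i].mean(axis=0))
    sim = cosine_similarity(prev, tfidf[i].toarray())[0, 0]
    if sim < THRESHOLD:  # vocabulary shift -> start a new summary snapshot
        boundaries.append(i)

print("episode boundaries at actions:", boundaries)
```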

Tim van Gelder, John Wilcox, Christine Brugh, Michelle Kolb, Sue Mi Kim, Alfred Jarmon

Gregory Treverton recently wrote, “I have come to think that intelligence is ultimately about telling stories.” The reasoning process by which intelligence analysts reach findings by developing plausible stories is what we call “narrative abduction.” At the heart of this process is constructing various narratives to explain the evidence available about a situation, and evaluating their plausibility. When used this way, narratives are complex hypotheses, and while some research attention has been given to the development and evaluation of hypotheses generally, the distinctive issues and challenges arising in the case of narrative abduction have been almost completely neglected. This research project was aimed at improving narrative abduction in a team context. Our objectives were to create a new structured analytic technique (SAT) designed specifically for narrative abduction, and to prototype a software platform intended to support teams of analysts in applying that SAT to realistic intelligence problems. We called the combination of a SAT and a supporting platform a “framework” for narrative abduction. Early in the project, we chose to work in the “design science” research paradigm, which iterates design-develop-evaluate cycles with close involvement of representative users. We ended up developing and evaluating two quite different frameworks. In both cases, we developed the app component on top of existing commercial platforms, which provided a great deal of core collaboration functionality and thus enabled very rapid prototyping of new concepts, though at the cost of making it difficult to move beyond initial prototypes to more mature apps. Our first framework was based on the idea that competing narratives are like alternative theories in science, and should be evaluated against high-level “virtues” or criteria such as explanatory power and simplicity. Our second framework was based instead on closer attention to analysts’ natural or intuitive sense-making activities. Both frameworks had strengths but also serious limitations. We are currently drawing out the “lessons learned” for future research in this area.

Sharon Joines, Hongyang Liu, Matt Schmidt, Lori Wachter

One of LAS’s goals is to expose LAS members to varied approaches for problem identification; exploration and synthesis; and solution and project development. Exposure can be augmented through participation in immersive experiences such as facilitated design research tools and technique sessions for project teams, project idea and solution critiques, and intensive Focused Discovery Activity (FDA) sessions focused on a single team or project idea.

In 2022, the Design Team’s activities were informed by the conceptual framework for supporting transdisciplinary and collaborative projects in which content, resources, and support are provided to LAS members regardless of their experience with design research. Our offerings focused on 1) supporting emerging LAS/IC projects moving forward by applying design approaches to research: tools and techniques (DARTT) to LAS/IC projects; 2) supporting IC experts to become skilled in the selection and application of design research tools and techniques; and 3) expanding design thinking and research resources including asynchronous and interactive videos, digital tools and toolkits, project templates and instructions, and curated resources.

James Smith, Felecia Morgan-Lopez, Sambit Bhattacharya

In the Fall of 2021, a partnership was formed between the LAS and the Computer Science Department at Fayetteville State University (FSU). Located 15 miles from Fort Bragg, FSU is a state-funded Historically Black College and University (HBCU) with a large military student population. Approximately 25% of FSU students are either active duty, veterans, or have prior military experience. The intent of the LAS and FSU partnership is to develop and leverage joint expertise in machine learning computer vision capabilities. This partnership was formed through the existing Educational Partnership Agreement that NSA has with the University of North Carolina (UNC) school system. Further, the LAS is seizing the opportunity to foster a continued relationship to address critical NSA challenges in support of Agency research, academic, and diversity priorities. The LAS collaborated with Dr. Sambit Bhattacharya, Professor of Computer Science at FSU, and the Spring/Fall 2022 Computer Science senior design students to research a concept that leverages Synthetic Data Generation and Generative Adversarial Networks on rare objects to improve the robustness of computer vision models for object detection. As a result of the collaboration between LAS and the FSU senior design class, the LAS team worked with the Office of Research and Technology Applications (ORTA) and FSU leadership to get the 5th Minority Serving Institution (MSI) Cooperative Research and Development Agreement (CRADA) officially established as of May 2022.

Judy Johnston, Rob Johnston, Christine Brugh, Brent Younce, Stephen Shauger, Pauline Mevs

Johnston Analytics began its study of hybrid teams and purposeful automation by conducting an integrative literature review on the topics of levels of automation, human-machine teams, and decision science. The results of this literature review supported the development of a model that identifies characteristics of automation in hybrid teams, potential impacts resulting from the use of automation, and their effects on performance. We created the model using Stella, a systems dynamics software application. The model was then populated with values from use cases based on user experiences with Social Sifter, the Belt and Road Initiative Dashboard, and the Common Analytic Platform. Simulations were run on these models, and the results were used to compare critical feedback loops, refine model elements, and provide an initial validation of the model and its assumptions. The project poster displays a process map of the model and results of the simulations.
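For readers unfamiliar with systems dynamics, the toy loop below shows the general simulation style: stocks updated each time step by flows that depend on one another. The variables and equations are invented purely for illustration; the project’s actual model was built in Stella.

```python
# Toy stock-and-flow feedback loop (invented, not the project's model).
dt, steps = 0.1, 200
trust, workload = 0.5, 1.0   # stocks, arbitrary units
automation = 0.3             # fraction of tasks automated (held constant here)

for _ in range(steps):
    errors = automation * (1.0 - trust) * 0.2                     # unchecked automation errors
    trust += dt * (0.1 * (1.0 - errors * 5) - 0.05 * errors)      # errors erode trust growth
    workload += dt * (-0.3 * automation * trust + 0.1 * errors)   # trusted automation relieves workload
    trust = min(max(trust, 0.0), 1.0)
    workload = max(workload, 0.0)

print(f"trust={trust:.2f}, workload={workload:.2f}")
```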

Patti Kenney, Sue Mi Kim, Michelle Winemiller, Jascha Swisher, John Slankas, Brent Younce, Ian Holmes

“Understanding the human” is acknowledged as a necessary prerequisite to successful integration of machine teammates into analyst workflows. PandaJam is an observational study designed to elucidate the workflow of language analysts and their cognitive processes. Specifically, we investigated the types and frequencies of errors made by analysts in an information-retrieval-from-audio scenario. Using an Elasticsearch-based query tool to explore thousands of hours of audio from the Nixon White House tapes and automatic transcriptions of this audio, analysts were tasked with answering four questions about the pandas that China gifted to the U.S. in 1972. Analysts were required to uncover and connect multiple pieces of information related to each question in order to complete the task correctly. A “think-aloud” protocol was employed, such that analysts narrated their thought processes as they worked through the task; their verbalizations were recorded and transcribed, and their interactions with the query tool were logged. Human annotators used these artifacts to determine what “errors” resulted in analysts’ inability to completely and correctly answer a question. The outcomes of this study have important implications for technologists whose goal is to design and implement machine capabilities intended to augment the workflow of language analysts specifically, and of intelligence analysts in general.

Alvitta Ottley, R. Jordan Crouser, Christine Brugh, Ken Thompson, Jacque Jorns

Prior research has demonstrated that the experience, personality, and cognitive abilities of the user can have significant implications for tools that support data-driven reasoning and decision-making. This project continues to advance an agenda for transforming the one-size-fits-all design methodology for visualization design, extending this line of inquiry beyond exploratory data analysis and goal-directed investigation, into decision-makers’ ingestion of analytic end-products (e.g. written reports, summary dashboards, and placemats) and their subsequent confidence and decision quality. It seeks to enhance decision-making capabilities by first systematically investigating how individual traits modulate decision-making behavior using visualization in the context of intelligence analysis and then using this information to establish design guidelines for developing reporting tools that can readily adapt to the needs of individual decision-makers. We present the findings of our studies, which include both traditional experiments and an interview study with domain experts, and highlight some key areas for future innovation. 

Sean Lynch, Aaron Wiechmann, Stephen Williamson, Mike Green, Tina Kohler, Patti Kenney

Since 2016, the LAS has been sponsoring capstone research projects for small groups of graduating seniors in the NCSU Computer Science department. The LAS benefits from talented students working on problems of interest to the lab, and the students gain valuable experience tackling real-world R&D problems. These projects have spanned a breadth of technical topics over the years, typically in the areas of cloud computing efficiency, big data, and machine learning. In 2022, the LAS sponsored four student teams:

Teams 1 and 2: Two of these teams worked on similar projects, namely designing and building a modern database structure known as a “knowledge graph,” populated with the famous dataset of the Nixon White House recordings and the associated Nixon Presidential Daily Diary. These knowledge graphs support unique forms of insight, and the Nixon data makes for an interesting testbed (a minimal sketch of the idea follows below).

Team 3: A third student team created a capability to allow users to see the various types of JavaScript functions and properties accessed when their browser renders a web page. This offers a unique look into the operations of various websites and enables subsequent analysis.

Team 4: A fourth student team is currently working to enhance the LAS machine learning data annotation tool Infinitypool. Specifically, the student team is creating a pre-annotation capability which will let Infinitypool automatically estimate and mark the annotations for a piece of data prior to presenting it to the human annotator. The user can then accept this pre-annotation or modify it to their liking, easing and speeding the process of human annotation.

Come learn about these and other past Senior Design projects at the exhibit.
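For the knowledge graph work by Teams 1 and 2, the general idea can be sketched with networkx; the nodes and edges below are invented examples, and the teams’ actual schema and storage layer are not specified here.

```python
# Minimal knowledge-graph sketch over the Nixon corpus (example entities only).
import networkx as nx

kg = nx.MultiDiGraph()

# Entities become typed nodes...
kg.add_node("Richard Nixon", type="person")
kg.add_node("Oval Office", type="location")
kg.add_node("conversation_741-002", type="recording", date="1972-06-23")

# ...linked by typed relationships, e.g. drawn from the Presidential Daily Diary.
kg.add_edge("Richard Nixon", "conversation_741-002", relation="participates_in")
kg.add_edge("conversation_741-002", "Oval Office", relation="recorded_at")

# Queries become graph traversals: which recordings does Nixon appear in?
for _, rec, data in kg.out_edges("Richard Nixon", data=True):
    if data["relation"] == "participates_in":
        print(rec, kg.nodes[rec].get("date"))
```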

Rob Capra, Jaime Arguello, Bogeum Choi, Mengtian Guo, Jascha Swisher, Patti Kenney, Michelle Winemiller, John Light

Intelligence analysis of audio data often involves: (1) a triage stage in which data is reviewed, filtered, and annotated by one group of analysts; (2) an architecture to store the information extracted and annotated during the triage stage; and (3) a downstream analysis phase in which the conversations flagged during the triage stage are reviewed for further analysis by other groups of analysts.

A key challenge in this process involves developing processes, tools, and user interfaces for analysts to extract, annotate, and save information during the triage stage that will support downstream analysis. During the triage phase, analysts need to capture and annotate information about specific entities, events, and relationships. Additionally, analysts need to deal with the fact that potentially critical information may be implied, uncertain, vague, or even conditionally true based on other factors (e.g., timeframe). Knowledge graphs are one type of data structure that can support the representation of knowledge; the modeling of entities, attributes, and relationships; and the utilization of stored information by downstream humans and algorithms. Research is needed to develop architectures and interfaces that can effectively and efficiently support the workflows of intelligence analysts through human-in-the-loop processes.

The research proposed in this white paper outlines work in four main areas: (1) understanding what types of background and contextual information analysts want to capture during the triage process; (2) developing tools and user interfaces to support analysts in capturing, annotating, and modifying information as they transcribe and review audio conversations (i.e., during data input workflows); (3) evaluating the tools developed to support these data input workflows; and (4) developing techniques to effectively present structured annotations and knowledge for downstream analysis. 
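As a purely hypothetical sketch of how the implied, uncertain, vague, or conditionally true information described above might be carried from triage into a knowledge graph, an annotation record could attach explicit qualifiers to each claim; nothing below reflects the project’s actual schema.

```python
# Hypothetical triage annotation with uncertainty qualifiers.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TriageAnnotation:
    source_audio: str               # recording the annotation came from
    subject: str                    # entity, event, or relationship
    claim: str                      # what the analyst extracted
    confidence: float = 1.0         # analyst's certainty, 0.0-1.0
    hedged: bool = False            # explicitly marked as implied or vague
    valid_during: Optional[str] = None  # timeframe the claim is conditioned on

note = TriageAnnotation(
    source_audio="call_0137.wav",
    subject="shipment",
    claim="delivery may happen next week",
    confidence=0.4,
    hedged=True,
    valid_during="week of 2022-11-14",
)
```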

Mark Wilson, Stephen Paul, Christine Brugh, Paul Narula, Michele Kolb

Team Strength is a suite of tools designed to help operational team leads identify where their teams are out of alignment and focus their efforts on areas where misalignment diminishes team performance. Team Strength leaves team leads in the driver’s seat, providing data-based insights into their teams together with a menu of potential actions.

Team Strength starts with an initial Team Strength profile tool, with follow-on tools to help address performance, work activity, motivation, customer, and mission-related alignment issues that the profile tool may identify. Each tool is designed to be light-touch, taking around 5 minutes for data collection. Because team alignment is not a static condition, Team Strength is designed for iterative use and provides trend analysis for up to three profiles. It is Excel- and survey-based, so there is no custom platform requiring network support.

Team Strength is the culmination of an LAS research effort on analyst performance, including prior work under Project Westwolf. It was developed based on industrial and organizational psychology principles and input from teams across the enterprise. While initially designed for first line analyst team leads, Team Strength tools and their precursors have been successfully used by other types of teams, and have been expanded to cover a wider range of work roles and multi-functional teams.

Helen Armstrong, Lori Wachter, Elizabeth Chen, Katie Denson, Elizabeth Gabriel, Brian Sekelsky, Jillian Swaim, Riley Walman, Jeff Wilkinson, Amanda Williams, Jacob Williams, Sean Lynch, Sue Mi Kim, Ken Thompson

When an event of national importance occurs, an Intelligence Analyst’s day may well be 24 hours long in order to respond to the needs of national security. But most mornings, analysts commute from home to arrive at a desk where complex analysis tasks from the previous day make it challenging and time consuming to begin the day’s work. This project explores how the design of a Tailored Daily Report (TLDR) might use the affordances of machine learning to provide a personalized user experience so that an analyst might quickly and knowledgeably enter the day’s workflow. The project focuses on three core questions: “How might the interface tailor content to summarize the analyst’s most recent and relevant workspace?”; “How might the interface identify or recommend relevant new information or data that has transpired since the analysts’ last session?”; and, “How might the interface enable the user to quickly drill down into deeper levels of data or analytical results, so that the user might query the information presented?” Researchers from the Master of Graphic & Experience Design Program, led by Professor Helen Armstrong and working in collaboration with LAS analysts, utilized user-experience methods such as interviews, personas, scenarios, user journey maps, benchmarking and rapid prototyping to research and develop innovative solutions that address user pain points and opportunities. The resulting prototypes revealed issues such as the importance of responding to different levels of analyst expertise and encouraging cross-collaboration, indicating overlapping sources or information, and communicating uncertainty in AI generated data.

Brent Harrison, Stephen G. Ware, Anton Vinogradov, Rachelyn Farrell

We applied a variety of artificial intelligence techniques to summarize and visualize the process analysts follow when answering questions. We collected detailed data on the searches analysts perform, the documents they read, and which documents they consider important from two sources: a serious game we designed called Insider Threat and LAS’s PandaJam project. Using data segmentation, machine learning, and clustering, we detect when analysts are doing the same activity. We visualize the order and popularity of actions analysts take as a graph or flowchart. Using this data and our model of the problem, we can also make intelligent suggestions as to what an analyst working on the same problem might do next. Suggestions are based on two different methods: one that uses machine learning to build a model of how other analysts solved the same problem, and one that uses a formal model of the problem to recognize the beliefs and intentions of the characters involved. Our results show that our visualization methods can generalize to multiple domains and lay the groundwork for tools that make intelligent suggestions to analysts based on a wide variety of AI methods.
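A rough sketch of the segment-and-cluster step: represent each window of analyst actions as a bag of action types and cluster similar windows together. The action names and the choice of k-means are illustrative assumptions, not the team’s actual pipeline.

```python
# Cluster analyst activity windows by their action-type profiles.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import KMeans

# Hypothetical logged action windows (one list of actions per time slice).
windows = [
    ["search", "search", "open_doc"],
    ["open_doc", "highlight", "note"],
    ["search", "open_doc", "search"],
    ["note", "note", "highlight"],
]

features = DictVectorizer().fit_transform(Counter(w) for w in windows)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # windows sharing a label are treated as the same activity
```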

Data Science and Machine Learning in Content Triage

Alexandros Kapravelos, Junhua Su

Web attacks are becoming increasingly sophisticated and targeted. One method to provide a more secure web environment is through better detection of malicious activity. With JavaScript, existing options include cumbersome, hard-to-maintain in-browser solutions or inline, yet highly detectable, solutions. A new research effort, NCSU’s VisibleV8 framework, offers another direction through transparent instrumentation of Google’s V8 JavaScript engine. VisibleV8 is a dynamic analysis framework hosted inside V8, the JavaScript engine of the Chrome browser, that logs native function and property accesses during any JS execution. This allows researchers to have deeper insight into the internals of the browser when it executes malicious JavaScript code. In this proposal we aim to leverage VisibleV8 to build behavioral models of malicious JavaScript code. These behavioral models of webpages could be used in other cybersecurity applications as an innovative and potentially automated way to detect malicious activity in dynamic web content.
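One hedged sketch of what a behavioral model built on such logs might look like: count which native APIs a page touched and score the resulting profile. The log lines below are invented stand-ins, not VisibleV8’s real log format, and a real model would be learned rather than rule-based.

```python
# Turn (invented) instrumentation log lines into API-usage features.
from collections import Counter

log_lines = [
    "get window.navigator.userAgent",
    "call document.createElement",
    "call CanvasRenderingContext2D.getImageData",  # fingerprinting-adjacent
    "call WebSocket.send",
]

# Feature vector: frequency of each accessed API per page load.
features = Counter(line.split()[-1] for line in log_lines)

SUSPICIOUS = {"CanvasRenderingContext2D.getImageData", "WebSocket.send"}
score = sum(features[api] for api in SUSPICIOUS) / max(sum(features.values()), 1)
print(f"suspicion score: {score:.2f}")  # a learned model would replace this fixed list
```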

Andrew Freeman, Montek Singh, Ketan Mayer-Patel

While object detection based on convolutional neural networks (CNNs) is a well-established technique, these systems only examine single image frames, thus failing to exploit the temporal redundancies of frame sequences in video sources. Existing attempts to incorporate temporal features in object detection tasks typically rely on recurrent layers and hand-crafted network designs. Our proposed solution, by contrast, combines an out-of-the-box 2D object detector with a novel spatio-temporal video representation called ADΔER. Through a video pre-processing stage, we produce discrete images which carry features indicating temporal value stability for each pixel. We then input these images to an unmodified YOLO architecture for object detection and classification. We extract frame-level motion summary information from our ADΔER representation and visually present it alongside object classifications in a web dashboard. Analysts can then navigate our video player to find video timestamps with the motion rates and classes of interest. We compare the object detection performance of our system with and without temporal features in the image representations, trained on the BDD100K video dataset with the YOLOv5 architecture.
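The detection stage can be sketched with an unmodified YOLOv5 model loaded through torch.hub; the ADΔER pre-processing itself is the project’s contribution and is not reproduced here, so `frame.png` stands in for one of its temporal-feature images.

```python
# Run an off-the-shelf YOLOv5 detector on a pre-processed frame.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("frame.png")           # accepts paths, arrays, or PIL images
detections = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
print(detections[["name", "confidence"]])
```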

Stephen Williamson, Sheila Bent, Felecia Morgan-Lopez, James Smith, Tina Kohler, Al Jarmon, Jose Quinones, Brent Younce, John Slankas

The LAS EYERECKON project is creating an unclassified video content triage processing pipeline to demonstrate the full scope of MLOps, including techniques, tools, workflows, and resources. With unclassified video content provided by customers, LAS has crafted a demo to showcase how video analytics can be natively incorporated and prioritized within the intelligence analyst workflow. Key areas of research for this project include object detection, recognition, and tracking; processing and storage of ML output; user search capabilities; and summarization of video data (e.g., geolocation-focused). Stop by our booth on day 2 to see the demo in action and learn all about our use cases and future efforts.

Christine Brugh, Natalie Kraft, Stephen Shauger, Michelle Winemiller

Looking to understand the complex relationships between Chinese foreign investment, public opinion on Chinese development, and microeconomic features, we have developed interactive visualizations and models to analyze the impact of the Belt & Road Initiative. The BRI is a Chinese-led global development strategy promoting regional integration and economic stimulation through partnerships with over 135 countries. Through spatio-temporal analysis of BRI projects, institutional strength, and standard-of-living indicators, we have been able to identify feature sets related to sentiment toward China. To support analysts in interpreting this data, interactive visualizations have been prototyped to allow for comprehensive regional analysis based on our harmonized datasets. This interactive dashboard can support analysts’ workflows by providing geospatial visualizations that centralize data and clarify the impact of influence by region. Applications of this research include supporting strategic objectives for identifying areas threatened by Chinese influence and changes in sentiment toward Chinese influence in local communities.

Nathan Clausen, David Schumann, Jay Revere, Agata Bogacki, Logan Perry, Patrick Dougherty

The IC is increasingly using artificial intelligence (AI) to cope with the vast, disparate, and dynamic data it collects and processes. Within this vast amount of data, a significant portion is unstructured and non-text-based: audio, image, video, and geospatial. As the data volume increases, IC analysts must process and analyze it efficiently while providing relevant insights necessary for mission success. SAS proposes collaborating with LAS to develop an automated, uniform interface that allows an analyst to search and triage image/video content based on ML output and validate mislabeling events to be sent for automatic retraining. This process allows the analysts to efficiently retrieve relevant information in combination with efficient ML development without the need for analysts to have a deep statistical understanding. SAS would like to execute on the following three tasks: 1) deploy an open-source computer vision model to a corpus of image/video data, 2) create and identify an ideal interface that allows an analyst to easily query and sift through relevant information, and 3) provide an analyst with a sample of potentially mislabeled images to validate and send back to model training for continuous improvement. 

Carlos Busso, Abinay Reddy, Tina Kohler, Brent Younce, Michelle Winemiller, Christine Brugh

With the overwhelming amount of information available through current media domains (e.g., Internet, TV), it is crucial to develop algorithms that are able to automatically process, identify, and preselect segments with potentially threatening behaviors. Experiencing and expressing emotions are fundamental characteristics of human beings. Recent findings suggest that emotion is integral to our rational and intelligent decisions. Emotion helps people relate with each other by expressing feelings and providing feedback. The underlying emotional content in a recording can indicate aggressive or disruptive behaviors during a conversation. Identifying and characterizing these emotional behaviors are challenging but important research initiatives in the context of national security, and, therefore, are relevant for the U.S. Intelligence Community (IC). We aim to develop robust speech algorithms to retrieve emotion-salient segments that can uncover and rank-order potential threats. The outcome of this research will be a distributed, portable framework able to automatically process thousands of hours of speech recordings, tailoring and facilitating the work of forensic experts, who will only have to review a reduced portion of the data. Conventionally, automatic speech emotion recognition (SER) systems either classify affective behaviors into emotional categories or predict emotional attributes such as arousal (calm versus active), valence (negative versus positive), and dominance (weak versus strong) with regression models. Studies have shown that ranking emotional attributes through preference learning methods has significant advantages over conventional emotional classification/regression frameworks. Preference learning is particularly appealing for retrieval tasks, where the goal is to identify speech conveying target emotional behaviors (e.g., negative samples with low arousal). This project explores advances in deep neural networks (DNNs) to build preference learning algorithms that can retrieve target emotional behaviors from a large speech repository.
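The preference-learning idea can be sketched as pairwise ranking: train a scorer so that the clip annotated as conveying more of the target attribute receives the higher score. The feature dimensionality, architecture, and margin below are illustrative assumptions, not the project’s models.

```python
# Pairwise preference learning with a margin ranking loss (illustrative).
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(88, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)

# x_high / x_low: acoustic feature vectors for pairs where the first clip
# was judged higher-arousal than the second (random stand-ins here).
x_high, x_low = torch.randn(32, 88), torch.randn(32, 88)

s_high, s_low = scorer(x_high).squeeze(1), scorer(x_low).squeeze(1)
target = torch.ones(32)  # "first input should rank above the second"
loss = nn.functional.margin_ranking_loss(s_high, s_low, target, margin=0.5)
loss.backward()
optimizer.step()

# At retrieval time, score every segment and return the top of the ranking.
```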

Jonah Schiestle, Michael Almeida, English Grey Sall

PATTIE is an information retrieval platform initially developed for traditional document stores (i.e., PDFs, text documents, literature, etc.). In 2022, a significant amount of work was conducted to adapt the platform to multi-party conversational audio. In this live (and recorded) demonstration of PATTIE, our team will showcase the basic features and functions as well as the multilingual capabilities we have worked on across English, Russian, and Farsi.

Tina Kohler, Patti Kenney, Sean Lynch, Jacque Jorns

Language analysts are readily able to triage voice data in audio repositories, yet few options exist that allow analysts to triage non-speech sounds of interest. This project seeks to provide analysts with the ability to triage for the presence of user-defined, non-speech sounds-of-interest in audio recordings. One concept for how this would be deployed in an analyst environment would be for analysts to submit a reasonable number of training samples of a particular sound-of-interest to a system, which would then train and apply a detection model to promote recordings containing that sound for further review. Capabilities of this nature have already found practical applications within private industry; one example is gunshot detectors as used by military and police forces. The SED project aims to develop a generalized, automated form of this non-speech detection capability to enable analysts to detect the sounds of interest to them.
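One plausible shape for this concept is few-shot matching: embed a handful of analyst-provided examples of a sound, then rank recordings by similarity to that prototype. The crude mel-spectrogram “embedding” below is a deliberate stand-in for a learned audio model, and the file names and parameters are hypothetical.

```python
# Few-shot sound-of-interest scoring via embedding similarity (illustrative).
import numpy as np
import librosa

def embed(path):
    y, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    return librosa.power_to_db(mel).mean(axis=1)  # 64-dim summary vector

# Analyst submits a few training samples of the sound-of-interest.
prototype = np.mean([embed(p) for p in ["shot1.wav", "shot2.wav"]], axis=0)

def score(path):
    e = embed(path)
    return np.dot(e, prototype) / (np.linalg.norm(e) * np.linalg.norm(prototype))

# Recordings with the highest scores get promoted for analyst review.
```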

Blake Hartley, Mike Geide, Stephen Williamson, Lori Wachter

The goal of TLeaves was to leverage the tools of Machine Learning to create a Recommender System for prioritizing data for cyber analysts. Using our work on RADS as a foundation, we focused on using Transfer Learning to both build and train Recommender Systems on unclassified data before transferring them to the classified environment. Our approach focused on three key methodologies within Transfer Learning: Subset Transfer, Model Transfer, and Time Domain Transfer. First, Subset Transfer allowed us to move from smaller datasets to larger datasets, and guided us in creating subsets of the large secret dataset. Next, by carefully choosing and manipulating unclassified datasets, we were able to use Model Transfer to transfer models between the datasets with minimal adjustment. Finally, Time Domain Transfer allowed us to study how models changed as the underlying data changed and helped us identify which model features were most important for predicting future behavior. We found that these approaches were well suited to the problem, and plan to continue this line of research.
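The Model Transfer step can be illustrated generically: pre-train a classifier on a large source dataset, then continue training on a small target set rather than starting from scratch. The synthetic data and classifier below are stand-ins, not the TLeaves recommenders or their data.

```python
# Generic model-transfer sketch: pre-train on source, adapt on target.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X_src, y_src = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tgt, y_tgt = make_classification(n_samples=200, n_features=20, random_state=1)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_src, y_src, classes=[0, 1])  # pre-train on the source domain

for _ in range(5):                               # adapt with a little target data
    model.partial_fit(X_tgt, y_tgt)

print("target accuracy:", model.score(X_tgt, y_tgt))
```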

James Smith, Felecia Morgan-Lopez, Al Jarmon, Dave Wall, Sambit Bhattacharya, FSU Senior Design Students

Computer vision models (CVMs) are typically trained on existing datasets for the detection of known objects. However, for rare objects or events, there may be insufficient representation of the objects for proper training of CVMs. For detection of rare events, CVMs could ultimately become biased toward the most represented categories or objects in a dataset, yielding “false positives,” and be likely to misidentify rare objects or events. Rare objects in this context are objects that are underrepresented or not represented at all in a dataset. For 2022, the LAS partnered with Fayetteville State University, a minority serving institution, to devise the first phase of a rendering engine that leveraged Synthetic Data Generation and Generative Adversarial Networks on rare objects to improve the robustness of CVMs for object detection.

Tina Kohler, Michelle Winemiller, Patti Kenney, Jacque Jorns, Pauline Mevs, Sean Lynch, University of Texas at Dallas

This research project goes beyond targeted searching of voice data for specific information, and enables analysts to learn more about the voice data in a repository when traditional search tools are ineffective. Providing analysts with algorithms to characterize voice data can reduce the amount of data to a size small enough to discover unknown information in a repository. According to the project’s analyst SMEs, this capability will aid them in finding segments of speech that discuss compelling information.

This year, the voice triage team investigated methods for detecting high-arousal speech in voice segments. High arousal is a state of feeling awake, activated, and highly reactive to stimuli: a person experiences high energy and tension, and the body is in a state of relatively heightened responsiveness, prepared for action. By isolating high-arousal speech, analysts have a starting point for exploring a corpus of voice data.
Our test corpus was the University of Texas at Dallas Multimedia Signal Processing Laboratory’s podcast corpus, containing nearly 20,000 segments of speech. We experimented with many combinations of feature vectors and machine-learning algorithms, and performed over 60,000 experiments this year in which we not only decreased the error rate by over 9%, but also learned a great deal about features, algorithms, and corpora.
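The experimental loop implied above looks roughly like the sweep below over feature-set and algorithm combinations; the features, models, and data shown are generic placeholders, not the team’s actual 60,000-experiment grid.

```python
# Sweep feature sets x algorithms and record the error rate of each pairing.
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

feature_sets = {
    "mfcc_stats": make_classification(n_samples=400, n_features=26, random_state=0),
    "prosodic": make_classification(n_samples=400, n_features=10, random_state=1),
}
algorithms = {"svm": SVC(), "forest": RandomForestClassifier(random_state=0)}

for fname, (X, y) in feature_sets.items():
    for aname, clf in algorithms.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{fname} + {aname}: error rate {1 - acc:.3f}")
```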

Scaling AI & Machine Learning to Support Next-Generation Platforms

John Slankas, Jascha Swisher, Brent Younce, Lori Wachter, Liz Richerson

Over the past four years, LAS has relied upon Amazon Web Services (AWS) to provide our unclassified computing infrastructure. The lab utilizes a number of different offerings from AWS: virtual machines (EC2), large object storage (S3), machine learning (Rekognition, SageMaker, Textract, Transcribe, Translate), Elastic Map Reduce (EMR), and other supporting services. We have also developed a custom application, AWS Commander, to manage this environment. Within AWS Commander, we seek to provide common functionality to our user base to significantly reduce the management overhead. Through the application, users can provision new EC2 instances, schedule those instances, create web proxies, and manage disk space. Users can also self-provision a “personal workstation” that provides a complete data-science image including several programming languages, various IDEs, and Jupyter. These workstations can optionally have graphics processing unit (GPU) capabilities for deep learning activities. In 2022, we supported the participants of the Summer Conference on Applied Data Science within this environment. AWS and AWS Commander provide the compute functionality to meet the unique processing needs of those participants. We also engaged with the NC State Computer Science Senior Design Project to build EMR capabilities into AWS Commander.
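Under the hood, the provisioning features described above reduce to AWS API calls such as the one below, sketched with boto3; the region, AMI, instance type, and tags are placeholders, and AWS Commander’s scheduling, proxy, and quota logic are not shown.

```python
# Provision an EC2 instance, roughly what a "personal workstation" request does.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical data-science image
    InstanceType="g4dn.xlarge",        # GPU-capable, for deep learning work
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "owner", "Value": "las-user"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```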

Mohammad Rostami, Jesse Thomason, Tejas Srinivasan

Transformer models are being adopted across data modalities due to their state-of-the-art performance. Despite this success, learning different tasks in isolation with transformer models is inefficient because we need to train one model per task. We develop a framework for learning and composing Adapter-based modules in a transformer with multimodal inputs in continual learning settings, where tasks arrive in a sequence, such that we can benefit from knowledge transfer and improve the efficiency of training. We consider vision-and-language classification tasks and build a new dataset as the first area of exploration. Our dataset consists of four vision-language multimodal tasks and five downstream single-modality tasks. We studied the performance of existing continual learning strategies, including experience replay, weight consolidation, and adapters, using two transformer models on our new benchmark, and determined the weaknesses of existing algorithms. Next, we developed a new algorithm that generates adapter weights using a hypernetwork, which allows the same model to be used across tasks. Experiments demonstrate that our method is effective and leads to improvements over prior approaches.
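A bottleneck adapter of the kind referenced above is a small trainable module inserted into a frozen transformer layer; the sketch below uses illustrative dimensions and omits the hypernetwork that generates per-task adapter weights.

```python
# Bottleneck adapter module (illustrative dimensions).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual connection

# Only the adapter trains; the transformer's own weights stay frozen, so each
# new task in the sequence adds a few thousand parameters rather than a model.
adapter = Adapter()
h = torch.randn(4, 16, 768)  # (batch, tokens, hidden) from a frozen layer
print(adapter(h).shape)
```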

Ben Strickson, Chris Evett, Devon Barrett, Cameron Worsley, Stewart Bertram, Syra Marshall, Giorgos Georgopoulos, Lori Wachter, Alfred Jarmon, Michael Green, Susie Bitters

Human analysts are a fundamental part of the process to convert unstructured cyber threat intelligence (CTI) reports into structured intelligence. Recent advances in technology provide analysts with an array of automated tools to assist with this process. While much effort has gone into developing the machine learning (ML) behind these tools, little consideration has been given to the user experience (UX) of the analyst. This is important because poorly designed tools can lead to negative consequences such as increased cognitive burden or a breakdown in trust between analyst and technology. Elemendar proposed, with the support of LAS, to answer the following question: how can analyst task performance be empirically assessed and improved when new ML tools are introduced? Alongside this quantitative analysis of an automated cyber tool, we performed qualitative evaluations with a specific focus on the issue of trust in ML. Our aims tie closely into the current research aims of LAS and the wider defense community, which hold that ML tools have reached a reasonable state of maturity and that the blocker to further adoption is effective integration. Our WP1 experiments highlighted this specific issue: a number of analysts gave feedback indicating that they were unable to place their trust in the ML software tool being trialed. To address this problem, our WP2 research reviewed and validated various ways to improve ML explainability through software visualization. For WP3, we investigated whether it is possible to further improve trust in our tool through analyst-led ML training.

Video: Explainable ML for CTI Analyst Workflows

Kaitlyn Yingling, Adam Anderson, Martin Courtney, Stephanie Allshouse

Frequency hopping signals often require labor-intensive analysis by humans to identify, detect, and process due to their unique signal characteristics. Some hop patterns cover a wide bandwidth and extend over long periods of time, which can be difficult for a human to discern. This project investigates methods to leverage YOLOv5 to detect frequency hoppers, similar to how YOLOv5 conducts object detection in cluttered images.
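The core data transformation can be sketched as rendering RF samples into a spectrogram image so that hop bursts become visual “objects” for a detector such as YOLOv5 to box. The synthetic signal and parameters below are assumptions for illustration; labeling and training are out of scope.

```python
# Render a toy frequency hopper as a spectrogram image for object detection.
import numpy as np
from scipy import signal

fs = 1_000_000                       # 1 MHz sample rate (assumed)
t = np.arange(0, 0.01, 1 / fs)
hops = np.concatenate([              # four hops at different carrier frequencies
    np.sin(2 * np.pi * f * t[: len(t) // 4]) for f in (50e3, 200e3, 120e3, 300e3)
])

f, tt, Sxx = signal.spectrogram(hops, fs=fs, nperseg=256)
image = 10 * np.log10(Sxx + 1e-12)   # dB-scaled array, saved as an image for YOLOv5
print(image.shape)                   # (freq_bins, time_bins)
```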

Patti Kenney, Susie Bitters, Skip Smith, Michael Green

GUESS explores the application of keyword augmentation, semantic search, topic modeling, and topic segmentation to the task of information retrieval from room audio. The long duration of recordings, multiple speakers, and poor audio quality all make it difficult for analysts to find useful information in room audio recordings. Using automated speech-to-text transcription can be helpful, but suffers when the transcription quality is poor and many words are mistranscribed. These difficulties can be mitigated by using context-aware sentence embeddings created by a large, pre-trained neural network. GUESS first asks an analyst to enter one or more analytic questions describing their search. It then demonstrates a process by which analysts can iterate through traditional faceted text search with keywords or other limiters like a date range, view semantic similarity results to find sentences in the transcriptions similar to their analytic questions, and analyze sentence-level topic modeling and topic segmentation results to identify key contextual information. The methods demonstrated by GUESS recognize that intelligence analysts seek evidence, and not just answers, from their analytic queries, and therefore require a different experience than users of Google, Alexa, or Siri might in similar contexts.
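The semantic-similarity step can be sketched with the sentence-transformers library; the model name below is a common default, assumed rather than confirmed as GUESS’s choice, and the transcript sentences are invented.

```python
# Rank noisy transcript sentences by similarity to an analytic question.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

question = "When will the equipment arrive at the port?"
transcript_sentences = [
    "uh the the shipment gets in tuesday maybe wednesday",
    "did you watch the game last night",
    "tell them the crates go straight to the dock",
]

q_emb = model.encode(question, convert_to_tensor=True)
s_emb = model.encode(transcript_sentences, convert_to_tensor=True)
scores = util.cos_sim(q_emb, s_emb)[0].tolist()

for sent, score in sorted(zip(transcript_sentences, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {sent}")
```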

Joe Anderson, Daniel Cornell, Alon Greyber, Simon Griggs, Kaylie Naylo, Stephen Williamson, Sean Lynch, Skip Smith, Aaron Wiechmann

LAS partnered with an NC State Computer Science Senior Design Team to incorporate a model-assisted annotation service into LAS’s Infinitypool application. Infinitypool is a customizable, collaborative, and compliant data labeling and data evaluation application that provides the essential human-machine interaction needed to support data triage and annotation. Given the human resources needed to generate quality, individualized feedback, this project works to minimize human time while maintaining data annotation quality and integrity.

Troy West, Jascha Swisher, Mike Green, Brent Younce, Aaron Wiechmann, Ryan Bock

LAS is investigating methods for consistently storing and deploying machine learning (ML) models for testing and evaluation. The current work has focused on understanding the open-source model deployment technology ecosystem, deploying Bailo, a model repository, and developing an early prototype model deployment service (MDS) using Seldon-Core and Kubernetes for running containerized ML models in a scalable architecture. Acting as both a model repository and a compliance framework, Bailo can consistently store models created by LAS, performers, senior design project teams, and SCADS. These models are then available for future research, demonstrations, and incorporation into LAS efforts. The MDS deploys models stored in Bailo and makes their predictions available via a consistent web API. This ML infrastructure supports broader LAS goals by making models easier to access for a broader audience, and allows LAS projects to utilize resource-intensive models on scalable infrastructure. Finally, experience in this space generates insight that assists LAS in advising larger ML infrastructure efforts. There will be a simple prototype demonstration at the symposium.
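From the client side, “predictions available via a consistent web API” looks like the call below, using Seldon-Core’s standard JSON prediction protocol; the host, namespace, and deployment name are placeholders for whatever the MDS actually deploys.

```python
# Query a Seldon-Core-served model through its standard prediction endpoint.
import requests

payload = {"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}  # model-specific features
resp = requests.post(
    "http://mds.example.internal/seldon/models/demo-model/api/v1.0/predictions",
    json=payload,
    timeout=10,
)
print(resp.json()["data"]["ndarray"])  # model output in the same envelope
```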

Tim Menzies, Kewen Peng, Suvodeep Majumder

Much intelligence work requires monitoring streams of textual data (email, blogs, Twitter feeds, newspapers, etc.). Deep Learning should be the perfect technology for this task, but due to the computational costs and incomprehensibility of its models, this is not the case: Deep Learning models can be slow to train, and, due to the nature of the learned models, it is all but impossible to understand or audit the models that are generated.

We note that Deep Learning’s problems of CPU cost + poor explanation are tightly linked. Learning and explanation both need to know how decisions are changed by data. Hence, we conjecture that deep learning could stop earlier (and return better explanations) if we apply insights generated from learning to help explanation (and vice versa). 

In summary, the central insight of this research is that explanation can enhance learning (and vice versa). We will explore large-scale incremental text mining problems using LAS data or data from other sources. 

Live Symposium Programming

On December 6, 2022, LAS hosted its annual research symposium at NC State’s Talley Student Union in Raleigh, NC. The program featured a keynote speaker, a conversation with LAS leadership, and presentations on LAS research themes followed by interactive posters and demonstrations.

Featured Remarks

  • Watch: Introduction, Alyson Wilson
  • Watch: Welcome Remarks, Mladen Vouk
  • Watch: Welcome Remarks, Amy Brown Gagnon and Alyson Wilson
  • Watch: Keynote Speaker, Rob Dunn, Senior Vice Provost for University Interdisciplinary Programs and William Neal Reynolds Distinguished Professor, Department of Applied Ecology, NC State University

Theme 1: Building a Diverse, Expert Intelligence Community Workforce

Explore recent developments from LAS-sponsored senior design projects, the LAS Summer Conference on Applied Data Science, and research in human-machine teaming.

Theme 2: Data Science and Machine Learning in Content Triage

Discover the latest projects on data science research – like the Chinese Belt and Road Initiative and the cybersecurity domain – that can be applied directly to military and defense mission strategy, and learn more about how LAS is integrating video, image, speech and text analytics (VISTA) into mission-relevant workflows.

Theme 3: Scaling AI & Machine Learning to Support Next-Generation Platforms

Learn how AI / ML techniques are being used in the intelligence community through knowledge graphs, reducing the amount of labeled data required to develop models, and studying ML methods to build more trust in them. LAS is working toward prototyping components of an ML Ops ecosystem, which can enable more efficient development and use of ML models.

Symposium Video Playlist

Solutions to National Security at the 8th Annual LAS Research Symposium

Searching hours of footage for the sound of gunshots. Turning spreadsheets into colorful maps to see where a country invests its money. Teaching machines to look for mission-critical information.

Helen Armstrong explains her project “What Happens When Design Students Innovate for Intelligence Analysis.”