2019 Research Symposium
On December 10, 2019, LAS hosted its fifth annual research symposium at the Raleigh Convention Center in Raleigh, NC. The symposium featured a keynote from Bernard Meyerson, chief innovation officer emeritus at IBM, a conversation with LAS leadership, project posters and demonstrations, and panel discussions on LAS research themes like machine learning workflow and integrity, social media and influence, and human/machine collaboration.
Presentations
- Welcome and introduction: Dr. Matthew Schmidt (21:15)
- Keynote Speaker: Dr. Bernard Meyerson (27:20)
- NC State Welcome: Chancellor Randy Woodson (1:47:25)
- LAS Overview: Dr. Alyson Wilson and Mr. Mike Bender (1:53:14)
- Panel: Machine learning workflow and integrity (3:02:20)
- Panel: Social media and influence (4:16:00)
- Panel: Human + machine collaboration (5:49:47)
Projects
We invite you to explore our projects categorized by themes.
- Analytic Integrity for Machine Learning
- Triage and Computational Social Science
- Technology / Tradecraft Transition and Training
- Human-Machine Collaboration
- Advancing Analytic Rigor
Analytic Integrity for Machine Learning
ML Explainability vs. Adversarial ML: If machine learning is more explainable, is it more vulnerable to attack?
Brian Kritzstein, Stephen Shauger, Aaron Wiechmann, Michael Green, Nicole Nichols, Lawrence Phillips, Jeremiah Rounds
The Laboratory for Analytic Sciences (LAS) and the Pacific Northwest National Laboratory (PNNL) hosted a one-week workshop at LAS to consider adversarial attack vectors that may be suggested by machine learning (ML) explainability and interpretability. As most work in the field focuses on the image domain, the workshop chose to address the text domain. Twenty-four individuals from five agencies, with a variety of technical backgrounds, attended the workshop. Participants received three days of instruction on the basics of deep learning, ML explainability, and adversarial ML. After instruction, participants were divided into groups to attack one of two text models. An emphasis was placed on intelligence analysts and machine learning subject matter experts combining expertise to understand potential attack vectors.
Machine Learning Workflow Integrity
Michael Green, Stephen Shauger, Jascha Swisher, Aaron Wiechmann
Machine learning (ML) dramatically improves the ability to identify patterns in large datasets and physical systems. However, the care with which ML pipelines are assembled directly impacts the quality of the solution. Without attention to the characteristics of the underlying datasets, model constraints, and the operational environment, ML-based solutions will invariably fail to live up to their customers' expectations. We discuss considerations for ML workflow integrity and the ramifications for services that support ML, with a focus on labeling, data integrity, and interpretability. We also demonstrate a practical application using several modular components, some developed in-house at LAS and some available as open source, to build an ML workflow that promotes quality and trust in its output and is accessible to multiple classes of ML stakeholders.
Monero Blockchain Analysis and Synthetic Blockchains: Probing Privacy Claims with Machine Learning
Nathan Borggren, Hyoung-yoon Kim, Lihan Yao, Gary Koplik
Monero is a popular cryptocurrency that focuses on privacy. The blockchain uses cryptographic techniques to obscure transaction values, as well as a "ring confidential transaction" that seeks to hide a real transaction among a variable number of spoofed transactions. We have developed training sets of simulated blockchains of ten and fifty agents, for which we control the ground truth and keys, in order to test these claims. We featurize Monero transactions by characterizing the local structure of the public-facing blockchains and use labels obtained from the simulations to perform machine learning.
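As a flavor of the approach, the sketch below (illustrative assumptions throughout, not the authors' featurization) builds simulated ring transactions with known ground truth, featurizes each ring member with simple local statistics, and trains a classifier to pick out the true spend.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate_ring(n_decoys=10):
    """One ring signature: the true spend is hidden among decoys. The toy
    assumption is that true spends skew newer and less-reused."""
    age = rng.exponential(30.0, n_decoys + 1)    # days since output creation
    reuse = rng.poisson(3, n_decoys + 1)         # appearances in other rings
    real = rng.integers(n_decoys + 1)
    age[real] *= 0.5
    reuse[real] = max(0, reuse[real] - 2)
    X = np.column_stack([age, reuse])
    y = np.zeros(n_decoys + 1)
    y[real] = 1
    return X, y

rings = [simulate_ring() for _ in range(2000)]
X = np.vstack([r[0] for r in rings])
y = np.concatenate([r[1] for r in rings])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:18000], y[:18000])
print("held-out accuracy:", round(clf.score(X[18000:], y[18000:]), 3))
```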
Temporal Query Answering on Disparate Antimicrobial Resistance Data
Jing Ao, Rada Chirkova, Jascha Swisher, Michael Green, Tracy Standafer
In this poster, we discuss the problem of answering user-specified temporal queries on disparate data in the antimicrobial resistance (AMR) application domain. Our work is motivated by salient properties of the AMR data and queries, from which we derive the major objectives for answering the queries and propose our solutions. The poster also discusses challenges encountered in this research.
LoadWolf – CS Senior Design 2019
Austen Adler, Bryan Arrington, Chris Benfante, Cameron Hetzler, Sean Lynch, Aaron Wiechmann, Stephen Shauger, Stephen Williamson
Each year, LAS sponsors a group of NC State University Computer Science seniors on their "Senior Design" course project. This year, the team investigated whether a scheduling algorithm for a cloud analytic system could be created that processes all users' analytic submissions while balancing system-wide, time-varying resource demands according to the system owner's priorities. Several prototype algorithms were created and tested, and they compared favorably against a random submission baseline.
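As a toy illustration of why scheduling beats random submission on a shared resource (an assumed setup, not the team's algorithms), the sketch below compares the classic weighted-shortest-processing-time rule against a random order on priority-weighted completion time.

```python
import random

random.seed(1)
# Each analytic submission: (duration in minutes, owner-assigned priority).
jobs = [(random.uniform(1, 10), random.choice([1, 2, 5])) for _ in range(100)]

def weighted_completion_time(order):
    clock, total = 0.0, 0.0
    for duration, priority in order:
        clock += duration
        total += priority * clock
    return total

random_order = jobs[:]
random.shuffle(random_order)
# WSPT: run jobs in increasing duration/priority ratio; optimal for this
# metric on a single machine.
wspt_order = sorted(jobs, key=lambda j: j[0] / j[1])

print("random submission:", round(weighted_completion_time(random_order)))
print("WSPT schedule:    ", round(weighted_completion_time(wspt_order)))
```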
Machine Learning Interpretability to Aid Structured Analytic Tradecraft
Evelyn Fox, Cordis Carter
This research effort focused on introducing machine learning interpretability techniques to aid structured analytic tradecraft. Analysts are hesitant to use complex mathematical models in their analysis because the results are difficult to interpret and communicate. A hands-on workshop introduced various model interpretability techniques to understand (1) how analysts might employ model interpretability methods and (2) whether those methods increase confidence in communicating complex modeling results. A scenario and machine learning model pipeline were developed with two publicly available datasets (The GDELT Project and the World Food Programme) to give analysts a problem set representative of their current workflows. Analysts with more technical backgrounds, such as in math or computer science, found interpretability techniques applied throughout the data analytics lifecycle critical to the analysis process; analysts with limited mathematical knowledge struggled to understand how the various interpretability techniques could support or contradict their analysis. Despite this variation in backgrounds, common feedback centered on a lack of understanding of what different types of models do, how they are used, and what types of data they require. The analysts ultimately agreed that rapid adoption of advanced analytics is most likely in areas where the risk of an incorrect application is low.
Image Authentication
Joe Littell, Jascha Swisher, Aaron Wiechmann
Advances in generative adversarial networks (GANs) have enabled the creation of highly realistic synthetic images, particularly human portraits. While these techniques have found legitimate applications in film and comedy, synthetic images have already been used in publicly reported influence campaigns and could also see use in counterfeiting, espionage, blackmail, and other nefarious activities. Experimenting with GANs is essential to scope their potential impact, particularly by exploring how well humans can recognize authentic images and how automated techniques might best be deployed to assist a human analyst.
Automated Report Generation utilizing Topic Flow and Anomaly Detection
Colin Potts, Aaron Wiechmann, Sean Lynch, Tracy Standafer
We present an architecture and implementation for the automatic analysis of large email corpora. The system produces various analysis artifacts and corresponding visualizations. This information is then synthesized into a textual report by a context-aware generator, whose generative component attempts to communicate the patterns and anomalies present in the dataset through a storytelling paradigm. The system utilizes doc2vec for topic analysis along with network graphs, clustering, and communicative intent categorization. By combining these components we create a topic flow graph that shows how information flows and evolves over time. We further model each individual and each cluster of communicants for their unique biases in the dataset with regard to all of these factors. This allows us to monitor and measure individual emails and chains with a quantitative measure of anomalous behavior, both with respect to the people involved and the communicative groups to which they belong.
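A minimal sketch of the embedding-plus-clustering core, assuming gensim and scikit-learn; the tiny corpus and the per-sender distance used as an anomaly score are illustrative stand-ins for the full system described above.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

emails = [
    ("alice", "quarterly budget review attached please comment"),
    ("alice", "updated budget numbers for the quarterly review"),
    ("bob", "server maintenance window scheduled for tonight"),
    ("bob", "patch deployed after the maintenance window"),
    ("alice", "wire the funds to the offshore account immediately"),  # anomalous
]
docs = [TaggedDocument(text.split(), [i]) for i, (_, text) in enumerate(emails)]
model = Doc2Vec(docs, vector_size=16, min_count=1, epochs=200, seed=0, workers=1)
vecs = np.array([model.dv[i] for i in range(len(emails))])

topic = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)

# Per-sender anomaly score: distance of each email from the sender's centroid.
for i, (sender, text) in enumerate(emails):
    idx = [j for j, (s, _) in enumerate(emails) if s == sender]
    centroid = vecs[idx].mean(axis=0)
    score = float(np.linalg.norm(vecs[i] - centroid))
    print(f"{sender:5s} topic={topic[i]} anomaly={score:.3f} {text[:40]}")
```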
Retention Decisions Based On Data Usage
Stephen Williamson
In 2017, Harvard Business Review estimated that Walmart collected more than 2.5 petabytes of data every hour from its customer transactions (about 50 million filing cabinets' worth of text per hour). As the amount of data being produced grows, decisions will need to be made about which data to keep. This project investigates one approach to improving the usefulness of the data chosen for retention.
How Safe is this Conclusion?
Suvodeep Majumder, Tim Menzies
Nothing is permanent except change. Conclusions drawn yesterday may already have been overtaken by events. Ideas drift in time and in space, making those conclusions unsafe for future use. In day-to-day operations we see both temporal shifts (models learned by data mining may work well on their training data but offer erroneous conclusions on tomorrow's data) and context shifts (models that worked "there" may not work "here"; i.e., models learned from data in one context may be irrelevant or misleading in another). With terabytes of data arriving every moment, how do we know a contextual shift has happened, and how do we mitigate it to reduce the impact on our models? In this research we study both temporal and contextual changes to identify how they affect model performance. These results help us understand and create an effective framework for mitigating such data shifts. We explored this effect in network intrusion detection with 19 types of network attacks that the initial models had not seen before, and we built a hierarchical anomaly detector with a model certifier and fast local retraining to identify new shifts in the data.
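A simplified sketch of the detect-then-retrain loop (assumed details, not the authors' hierarchical detector): incoming batches are compared to the training window with a two-sample Kolmogorov-Smirnov test, and the model is retrained locally when drift is flagged.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_batch(shift=0.0, n=500):
    X = rng.normal(shift, 1.0, (n, 3))
    y = (X[:, 0] + X[:, 1] > shift).astype(int)   # boundary moves with the shift
    return X, y

X_ref, y_ref = make_batch()
model = RandomForestClassifier(random_state=0).fit(X_ref, y_ref)

for t, shift in enumerate([0.0, 0.1, 1.5]):       # the third batch has drifted
    X_new, y_new = make_batch(shift)
    drifted = any(ks_2samp(X_ref[:, j], X_new[:, j]).pvalue < 0.01
                  for j in range(X_new.shape[1]))
    print(f"batch {t}: accuracy={model.score(X_new, y_new):.2f} drift={drifted}")
    if drifted:                                   # fast local retraining
        model.fit(X_new, y_new)
        X_ref, y_ref = X_new, y_new
```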
Triage and Computational Social Science
Defining Triage for the Intelligence Community
Judith Johnston
To support research at LAS, the Computational Social Science/Triage team undertook the task of identifying the critical elements of an operational definition of triage. We conducted a multi-dimensional review of relevant research, covering the study of triage in psychology, education, medicine and nursing, social work, library science, and research methods. We reviewed findings on the major cognitive processes associated with triage, specifically concept formation; categorization, classification, sorting, and prioritization; and decision-making. We identified tools and techniques used in the act and process of triage, evaluation methods, and the contexts in which they are used, and we used these data to map the dimensions of the act and process of triage.
Computational Social Science and Triage
Sandra Harrell-Cook, Rob Johnston, Felecia Vega, John Slankas, Judy Johnston
The timely identification, analysis, and interpretation of "big data" is increasingly crucial to identifying information of value. Strategies are needed to explore, query, filter, and prioritize this barrage of massive datasets in response to analysts' information needs. For 2019, the Computational Social Science and Triage effort investigated triage processes, methodologies, and tradecraft through the topics of influence and radicalization, to create decision advantage for the Intelligence Community.
The Automatic Characterization of Possible Radicalized Individuals
Felecia Vega
Analysts continue to encounter ever-increasing amounts of data in a wider variety of forms, which increases the complexity of sorting and identifying relevant information. This project demonstrates an information triaging methodology, an approach geared toward exploring, filtering, and prioritizing information to identify potentially radicalized individuals. The project brings together the expertise of scholars in NC State's Psychology Department, technologists from industry, and LAS researchers and analysts to develop repeatable and scalable triaging methods for characterizing potentially radicalized individuals in large volumes of structured and semi-structured data.
Latent Class Analysis of Extremist Actors Radicalized in the United States
Sarah Desmarais, Christine Brugh, Samantha Zottola, Joseph Simons-Rudolph, Samantha Cacace
Most prior research has treated violent extremist actors and acts as homogenous. However, some evidence suggests heterogeneity in actors and their actions. We sought to increase knowledge regarding violent extremist actors in the U.S. by: 1) identifying groups of actors based upon their plot characteristics; and 2) describing characteristics of actors across these groups. Data were drawn from Profiles of Individual Radicalization in the United States. Latent class analysis revealed 3 groups of U.S. extremist actors distinguished by the targets, methods, and anticipated fatalities of their plots. Across these groups, there were significant differences in age, race/ethnicity, citizenship, and ideology.
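Latent class analysis is usually fit with specialized packages (e.g., poLCA in R); as a rough stand-in, the sketch below clusters one-hot-encoded plot characteristics with a Gaussian mixture and inspects the modal profile of each class. The data are synthetic and the variables are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic plot records: target type and attack method, with structure
# planted so that there are clusters to recover.
targets = rng.choice(["government", "civilian", "military"], size=300)
methods = np.where(targets == "government",
                   rng.choice(["explosive", "firearm"], size=300, p=[0.8, 0.2]),
                   rng.choice(["firearm", "vehicle"], size=300, p=[0.7, 0.3]))
df = pd.DataFrame({"target": targets, "method": methods})

X = pd.get_dummies(df).to_numpy(dtype=float)
gm = GaussianMixture(n_components=3, random_state=0).fit(X)
df["latent_class"] = gm.predict(X)
# Modal profile of each recovered class.
print(df.groupby("latent_class").agg(lambda s: s.mode()[0]))
```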
Country and Individual Characteristics Related to Terrorism Involvement
Samantha Zottola, Christine Brugh, Sarah Desmarais, Joseph Simons-Rudolph
The determinants of terrorist behavior occur within the context of the countries in which individuals reside. Yet, few studies have examined terrorist involvement with methods that account for the hierarchical nature of the data. We used data from the Western Jihadism Project and the Fund for Peace (Fragile States Index) to examine the relationship between individual-level and country-level characteristics in their impact on terrorist involvement. Multi-level modeling allowed for the nesting of individuals within countries. Results suggest that residency status and criminal history have a different influence on terrorism involvement depending on various indicators of country stability. This has important implications for risk assessment, namely, that it may be country dependent.
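A minimal sketch of the nesting idea with statsmodels: individuals within countries, a random intercept per country, and a cross-level interaction between an individual-level and a country-level predictor. The variables are invented, and a linear probability model stands in for whatever outcome model the study actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_per = 20, 50
country = np.repeat(np.arange(n_countries), n_per)
stability = np.repeat(rng.normal(0, 1, n_countries), n_per)    # country-level
criminal = rng.integers(0, 2, n_countries * n_per)             # individual-level
country_effect = np.repeat(rng.normal(0, 0.2, n_countries), n_per)

# Cross-level interaction: criminal history matters more in unstable states.
p = 0.2 + 0.1 * criminal - 0.05 * stability - 0.08 * criminal * stability
involved = (rng.random(country.size) < np.clip(p + country_effect, 0, 1)).astype(int)

df = pd.DataFrame(dict(country=country, stability=stability,
                       criminal=criminal, involved=involved))
m = smf.mixedlm("involved ~ criminal * stability", df, groups=df["country"]).fit()
print(m.summary())
```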
Applying the TRAP-18 to Lone Actors
Christine Brugh, Sarah Desmarais, Joseph Simons-Rudolph, Alexa Katon, Samantha Zottola
The TRAP-18 is an investigative framework to identify those at risk of lone actor terrorism. Using publicly available information on 77 jihadism-inspired lone actors, we rated TRAP-18 items and compared item prevalence across U.S. and European subsamples. Results reveal challenges in completing the TRAP-18 using publicly available information on this population: only four of 18 items were rated as present more often than they were rated absent or unknown. In contrast, two-thirds of the items were more often rated unknown than present or absent. Findings show some, but not many, differences in item ratings between U.S. and European lone actors.
Disaggregating Islamic State Terrorists
Eva McKinsey, Joseph Simons-Rudolph, Christine Brugh, Sarah Desmarais, Samantha Zottola, Peyton Frye
Previous research has indicated there are distinctions between terrorists based on the roles they assume and the types of actions they carry out in those roles (Gill & Young, 2011; Knight, Keatley, & Woodward, 2019; Simcox & Dyer, 2013); however, empirical research on this topic is limited (Desmarais et al., 2017). An analysis of organizational roles within the Islamic State group is particularly important considering its complex organizational structure, a factor often credited with the group's success (al-Tamimi, 2019; Stern, 2019). In this study, we used data from the Western Jihadism Project (Klausen, 2019) to examine and compare characteristics of individuals linked to IS as loose affiliates, active members, operational facilitators, and leaders. Results revealed significant differences between roles in demographic characteristics, criminal history, terrorism involvement, and foreign fighting.
Antecedent Behaviors Among Lone Actor Terrorists With & Without Mental Illness
Alexa Katon, Samantha Zottola, Christine Brugh, Sarah Desmarais
Evidence suggests that lone actor terrorists with and without mental illness differ on both demographic and behavioral factors (Corner & Gill, 2012). However, as existing work comparing lone actors with and without mental illness focuses on samples who endorse dissimilar ideologies (Gill, Horgan, & Deckert, 2014), it remains unclear if lone actors who endorse the same ideology show differences in behavioral factors according to their mental health status. The purpose of this study was to examine differences in behavioral factors between lone actor terrorists with and without mental illness using a sample of 79 individuals who endorse jihadist ideology. Chi-square analyses were conducted for 19 antecedent behavioral factors to compare the prevalence of demographic and behavioral factors between lone actors with and without mental illness. Findings revealed that lone actors with mental illness had military experience significantly more often than those without, as well as several other meaningful differences between the two groups of lone actors. In light of these findings, creating separate risk assessments and strategies may not be necessary for targeting jihadism-inspired lone actor terrorists with and without mental illness.
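One such comparison can be sketched as a chi-square test of independence on a 2x2 table; the counts below are made up (summing to the study's 79 individuals) purely for illustration.

```python
from scipy.stats import chi2_contingency

#                    military,  no military experience
table = [[14, 21],   # lone actors with mental illness
         [ 5, 39]]   # lone actors without mental illness
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```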
Identifying and Countering Social Media Influence Campaigns
Rob Johnston
Adversaries are utilizing social media to manipulate individual, group, and nation state behavior. This research is aimed at early identification, management, and countering of those campaigns through the use of machine learning, artificial intelligence, and social science.
Social Sifter
Brent Younce, Clint Watts, Rob Johnston
As highlighted by major public incidents such as Russia's efforts to influence recent national elections in the U.S., Germany, and France, social media has become an attractive medium for malicious individuals and institutions conducting organized influence campaigns. The Social Sifter project aims to develop a centralized interface that assists analysts seeking to detect, monitor, and assess foreign state-sponsored social media influence campaigns in several ways. First, the system centralizes capabilities for triaging large amounts of social media data into small networks of key influential accounts. Second, it integrates a variety of machine learning models (each built on distinct features of these data) that aim to automatically identify coordinated, state-sponsored influence campaigns. Finally, this information is presented visually in a single web interface designed to help analysts identify and understand these campaigns.
Identifying Biases in the Social Sifter Dataset
Mitchell Plyler, Brent Younce
We explore methods for identifying source bias in the Social Sifter dataset. Social Sifter is an LAS effort to identify foreign influence campaigns on social media. Deep learning models are used to classify online users as part of a foreign influence campaign (inorganic) or not (organic). We identify text-based biases in the training and testing datasets that yield high-scoring models but are unlikely to generalize to 'wild' data. Biases are identified using both conventional data mining and explainable deep learning models. After identifying and mitigating these biases, we reduce the measured performance of our models while qualitatively increasing our trust in those measures.
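A minimal sketch of the conventional data-mining side (assumed details): fit a linear model on bag-of-words features and inspect the largest coefficients, since tokens tightly coupled to the label are candidate collection artifacts rather than genuine signals of inorganic behavior.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["breaking news follow our channel", "follow our channel for truth",
         "had a great brunch today", "my cat knocked over the plant",
         "breaking truth they hide from you", "weekend hike photos"]
labels = [1, 1, 0, 0, 1, 0]          # 1 = inorganic, 0 = organic

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Tokens with extreme weights deserve scrutiny as possible dataset biases.
terms = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])
print("most 'inorganic' tokens:", terms[order[-5:]][::-1])
print("most 'organic' tokens:  ", terms[order[:5]])
```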
Classification of Social Media Users using a Generalized Functional Analysis
Anthony Weishampel, Ana-Maria Staicu, William Rand
Social media provides a far-reaching platform for influencers to spread their beliefs. Brands' social media and internet word-of-mouth presences are susceptible to malicious social media users, and the ability to detect various types of users is vital to mitigating their effects. We apply a new functional data analysis approach to classify social media users based on their online behavior. We consider a generalized multilevel functional model for the user's response profile, where the response profile is a binary time series indicating the times when the user is active. The model separates the user-specific variation from the day-within-user variation and from the mean trend. By expanding the user-specific variation, we extract coefficients that can detect automated users and identify users who are working in concert. These methods are implemented in Social Sifter.
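A heavily simplified sketch of the idea, with ordinary PCA standing in for the generalized multilevel functional model: each user becomes a mean daily binary activity profile, user-level variation is extracted as component scores, and a classifier separates automated from organic accounts. The bot signature here (rigidly periodic posting) is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
T = 48                                        # half-hour bins per day

def user_profile(bot):
    days = np.zeros((7, T))
    for d in range(7):
        if bot:                               # posts at fixed clock times
            days[d, ::6] = 1
        else:                                 # diffuse, day-varying activity
            days[d, rng.choice(T, 8, replace=False)] = 1
    return days.mean(axis=0)                  # mean daily profile per user

y = np.array([0] * 100 + [1] * 100)
X = np.array([user_profile(b) for b in y])

scores = PCA(n_components=5, random_state=0).fit_transform(X)
clf = LogisticRegression().fit(scores[::2], y[::2])   # even rows train
print("held-out accuracy:", clf.score(scores[1::2], y[1::2]))
```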
Forecasting State Instability: South East Asia
Arnab Chakraborty, Tuhin Majumder, Andrew Shaw, Soumendra Lahiri, Rob Johnston, William Boettcher, Sandra Harrell-Cook
We develop statistical machine learning methods to predict the number of CAMEO-coded events associated with state instability. The analysis is carried out for 12 Southeast Asian countries. For each country, we identify the covariates that are most informative and relevant to the forecasting task.
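Event counts are non-negative integers, so one natural baseline forecaster is a Poisson regression on lagged covariates. The sketch below, with invented covariates, shows the shape of such a model in statsmodels; the study's actual methods may differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120                                   # months of history for one country
protests_lag = rng.poisson(10, n)         # last month's protest-event count
gdp_growth = rng.normal(2.0, 1.0, n)
true_rate = np.exp(1.0 + 0.08 * protests_lag - 0.15 * gdp_growth)
events = rng.poisson(true_rate)           # CAMEO-coded instability events

X = sm.add_constant(np.column_stack([protests_lag, gdp_growth]))
model = sm.GLM(events, X, family=sm.families.Poisson()).fit()
print(model.params)                       # approximately recovers 1.0, 0.08, -0.15
print("next-month forecast:", model.predict([[1.0, 12, 1.5]]))
```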
Veracity: Data Analytics Models for Social Media Reliability
Héctor Rendón, Andrew Hollis, Alyson Wilson, Jared Stegall
Seven in every ten Americans use some form of social media for a variety of motives ranging from communicating with others to reading and sharing news and information (Pew Research Center, 2019). As social media can be used for immediate and newsworthy information, these technologies have also been found to be efficient platforms for misinformation dissemination (Shin et al., 2018). The objective of this study was to explore data analytics models that would be helpful to assess the reliability of specific social media accounts.
Forecasting State Stability in Southeast Asia
William Boettcher, Rob Johnston, Luis Esteves, Samantha Schultz, Katelyn Houston, Mariana Castro-Arroyo, Margaret Harney
Scholars and intelligence analysts studying state instability and/or failure often rely on large-n statistical analyses of time-series datasets. These quantitative studies have several advantages in terms of reliability and internal validity; they also can cover long periods of time and support probabilistic forecasts regarding future events. Unfortunately, these studies are often limited by choices made by other researchers long ago and by coding decisions that lead to over- or under-counting of events or mischaracterization of outcomes and impacts. Qualitative case studies can ameliorate these deficiencies by focusing on a limited number of cases, identifying relevant independent and dependent variables, and illuminating causal processes. This poster reports the results of just such a qualitative project, conducted in parallel with the development of a large-n machine-learning model reported elsewhere. Here we focus on Burma, Thailand, and the Philippines, complementing the quantitative analysis of a broader set of Southeast Asian countries. Our results support the external validity of the quantitative model and highlight particular variables of interest for further analysis.
Open ABM: Improving Reliability and Accessibility of Agent-Based Models
Conor Artman, Eric Laber, Rob Johnston
Modeling the complexity of human decision-making and group behavior is an inherently interdisciplinary and difficult task. Understanding and describing human behavior spans anthropology, cognitive science, economics, political science, psychology, and other social sciences; modeling those behaviors requires computational statistics, computer science, and mathematics. A large part of such an undertaking is developing a core interdisciplinary team that can speak and understand each other's professional jargon and arrive at a shared mental model of the work and a common language for the effort. Developing a simulation platform complex enough to accommodate the scale of human behavior and decision-making led the team to create an OpenABM gym. In tandem, we developed multi-agent reinforcement learning methodology to automatically optimize agent behavior.
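A toy sketch of the gym-style interface idea: an agent-based model wrapped in reset/step so that reinforcement learning code can optimize agent behavior against it. The environment, payoffs, and tabular Q-learning loop are all illustrative assumptions.

```python
import random

random.seed(0)

class ToyABMEnv:
    """Tiny agent-based model with a gym-style interface: each round the
    learning agent shares (1) or withholds (0) information, and the payoff
    depends on how many of its neighbors shared last round."""
    def __init__(self, n_neighbors=4):
        self.n = n_neighbors
    def reset(self):
        self.shared_last = random.randint(0, self.n)
        return self.shared_last                       # observation
    def step(self, action):
        reward = action * (self.shared_last - self.n / 2)
        drift = random.choice([-1, 0, 1])             # neighbors change behavior
        self.shared_last = min(self.n, max(0, self.shared_last + drift))
        return self.shared_last, reward, False, {}

# Tabular Q-learning against the wrapped ABM.
env, q = ToyABMEnv(), {}
state = env.reset()
for t in range(20000):
    if random.random() < 0.1:                         # epsilon-greedy exploration
        action = random.randint(0, 1)
    else:
        action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
    next_state, reward, _, _ = env.step(action)
    best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + 0.1 * (reward + 0.9 * best_next - old)
    state = next_state

# Learned policy: share only when enough neighbors shared last round.
print({s: max((0, 1), key=lambda a: q.get((s, a), 0.0)) for s in range(5)})
```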
Statistical Inference in Adaptive Experimental Design
Zhen Li, Eric Laber
With the increasing interest in personalization in recent years, linear contextual bandits have found important applications in a variety of fields, including recommender systems, mobile health, and dynamic pricing. Despite the surge of work on linear contextual bandit problems, the major research focus has been regret analysis, where confidence intervals for the parameters indexing the bandits are derived as a byproduct. Those confidence intervals are often too conservative to be useful for statistical inference. We propose a statistical inference framework, showing consistency and deriving valid confidence intervals for parameters in linear contextual bandits under several common sampling algorithms. Simulation experiments show that the proposed confidence intervals have the correct coverage probability. These theoretical results give practitioners tools they can use in a variety of applications.
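A minimal sketch of the setting (assumed details): data are collected by an epsilon-greedy linear contextual bandit, a per-arm least-squares fit is performed afterward, and the empirical coverage of a naive normal confidence interval is checked by simulation. Deviations from the nominal 95% under adaptive sampling are exactly what motivates corrected inference.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = {0: np.array([0.5, -0.2]), 1: np.array([0.1, 0.4])}   # true arm parameters
reps, cover = 200, 0
for rep in range(reps):
    G = {a: 1e-6 * np.eye(2) for a in theta}   # running X'X per arm
    b = {a: np.zeros(2) for a in theta}
    data = {a: [] for a in theta}
    for t in range(500):
        x = rng.normal(size=2)
        est = {a: np.linalg.solve(G[a], b[a]) for a in theta}
        a = int(rng.integers(2)) if rng.random() < 0.2 else int(est[1] @ x > est[0] @ x)
        y = theta[a] @ x + rng.normal(scale=0.5)
        G[a] += np.outer(x, x)
        b[a] += y * x
        data[a].append((x, y))
    # Naive OLS confidence interval for the first coefficient of arm 0.
    Xa = np.array([xy[0] for xy in data[0]])
    Ya = np.array([xy[1] for xy in data[0]])
    beta = np.linalg.solve(Xa.T @ Xa, Xa.T @ Ya)
    s2 = ((Ya - Xa @ beta) ** 2).sum() / (len(Ya) - 2)
    se = (s2 * np.linalg.inv(Xa.T @ Xa)[0, 0]) ** 0.5
    cover += beta[0] - 1.96 * se <= theta[0][0] <= beta[0] + 1.96 * se
print("empirical coverage of the nominal 95% CI:", cover / reps)
```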
Life After Linkage: Tackling the Downstream Task with Error Propagation
Andee Kaplan, Brenda Betancourt, Rebecca C. Steorts
Record linkage (entity resolution or de-duplication) is the process of merging noisy databases to remove duplicate entities that often lack a unique identifier. Linking data from multiple databases increases both the size and scope of a dataset, enabling post-processing tasks such as linear regression or capture-recapture to be performed. Any inferential or predictive task performed after linkage can be considered as the downstream task. While recent advances have been made to improve flexibility and accuracy of record linkage, there are limitations in the downstream task due to the passage of errors through this two-step process. We present a generalized framework for creating a canonical dataset post-record linkage for the downstream task, called canonicalization.
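A minimal sketch of canonicalization under assumed details: once linkage has grouped duplicate records, each group is collapsed into one canonical record by majority vote per field, ready for the downstream task.

```python
from collections import Counter

# Output of a (hypothetical) record linkage step: groups of duplicates.
linked_groups = [
    [{"name": "J. Smith", "city": "Raleigh", "age": 34},
     {"name": "John Smith", "city": "Raleigh", "age": 34},
     {"name": "John Smith", "city": "Durham", "age": 43}],
    [{"name": "A. Chen", "city": "Cary", "age": 29}],
]

def canonicalize(group):
    """Collapse a group into one record via per-field majority vote."""
    fields = group[0].keys()
    return {f: Counter(r[f] for r in group).most_common(1)[0][0] for f in fields}

canonical = [canonicalize(g) for g in linked_groups]
print(canonical)   # one representative record per entity for the downstream task
```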
Analyzing Influence of Social Media Through Twitter
Dhrubajyoti Ghosh, James Robertson, Soumen Lahiri, Rob Johnston, William Boettcher, Michele Kolb
We develop statistical methodology to predict the outcomes of several recent elections, demonstrating the forecasting potential of social media data.
Technology / Tradecraft Transition and Training
Technology Transfer Best Practices
Greta Bock, Eli Typhina, Jason Mooberry
The Technology Transition Tradecraft and Training (T4) Team at the Laboratory for Analytic Sciences (LAS) performed a review of academic and industry literature for best practices concerning transitions and experimental prototype development. The areas of focus were Industry-to-Government and Government-to-Government technology transition. The team also included a North Carolina State University (NCSU) postdoc who researched cross-sector partnerships as well as Academic-to-Industry and LAS-to-Government technology transfer arrangements. A review of 180 research papers/sources revealed that "successful technology transfer depends on the way you organize and communicate with people."
The Design Thinking Training, Support, and Investigations for LAS/IC Projects
Sharon Joines, Byungsoo Kim, Hongyang Liu, Catalina Salamanca
The goals of Design Thinking Training, Support, and Investigations for LAS/IC projects were: 1) to provide training in design thinking by delivering refined "Design Thinking Through Design Research" advanced workshops, and 2) to leverage design thinking to benefit collaborative LAS/IC projects by facilitating mini-sessions for LAS projects and members. Two advanced workshops were delivered, focusing on selected design thinking and research methods and tools. Participants worked on their own LAS projects or on datasets generated by the Design Team for the workshop activities. Two facilitated design thinking sessions helped LAS members with concept mapping and action planning for their LAS projects and activities.
Collaboration Spotlight: Reserve Forces
Jody Coward, Jascha Swisher, Jeremy Seteroff, Thomas Booth
During 2019, LAS engaged in a collaboration partnership with the Reserve Forces. We identified mission-focused tasks that could be completed by military reservists on their drill weekends or during their required two-week training period.
Enacting Immersive Collaboration: A Study of Training Transfer
Jessica Jameson, Mariza Marrero, Carmen Vazquez, Haddon Mackie, Charles Belo, Dustin Harris, Nolan Speicher
Baldwin and Ford’s (1988) seminal work on training transfer defined transfer as both how the learned knowledge or skills may be applied in settings other than those experienced in training and how changes from those learned skills are maintained over time. There is a dearth of effective training assessment programs because organizations do not devote sufficient resources to evaluation (Salas & Cannon-Bowers, 2001). When trainers or scholars have conducted an assessment of training transfer, they are typically limited to self-report surveys approximately a year after the training event (Nikandrou, Brinia, & Bereri, 2009). In this project we assessed transfer of skills learned in our LAS collaboration workshop through a combination of surveys and interviews with workshop participants. We learned about the features of training that are most effective in terms of perceptions of the relevance of the learned skills and their usefulness at LAS and in other contexts. This project has implications for improving the collaboration workshop specifically, as well as training features that can optimize transfer for other types of training.
Team Performance and Collaboration
Carmen Vazquez, Mariza Marrero, Jessica Jameson
Our 2018 research highlighted the complexities of working in a diverse and inclusive environment and how those complexities affect performance. In 2019, our research homed in on communication skills and team culture as elements that can drive a team's success or underscore its shortfalls. Our research was underpinned by the assumption that an organization or team that is aware of its culture and communication style is best positioned to improve its performance. To a lesser degree, we also examined the role of the leader as a change agent for improved team performance. Three teams, "LAS Staff," "Military Reservists," and "Technical Directors," completed publicly available communication, culture, and personality surveys. The results were discussed with each group with a focus on performance improvement, given their individual and team results in each of the three dimensions (communication, culture, personality).
LAS Unclassified Infrastructure
John Slankas, Jascha Swisher, Andrew Shaw
In 2019, LAS migrated its unclassified infrastructure from an on-campus, privately managed cloud to Amazon Web Services (AWS). This poster discusses the primary components, the advantages gained through using AWS, and the challenges involved. The poster also highlights some of the significant usage of the infrastructure over the past year. Finally, the poster presents "AWS Commander," a tool developed by LAS to improve usability for common services, user management, budgeting, compliance, and security. AWS Commander also provides virtual development and data science environments.
WESTWOLF 2019 Use Cases
Paul Elder, Michele Kolb, James Smith, Ruth Tayloe, Demetrius Green, Steven Paul, Sarah Schaible, Brent Younce, Sam Wilgus, Mark Wilson
This year marks the culmination of LAS efforts on WESTWOLF. Four use cases are outlined in our poster, which accompanies a live demonstration of the automated prototype developed in 2019. WESTWOLF enables mission leaders to identify and act on key levers to guide their work groups to better mission performance, using data and visualizations to show where action can have the biggest impact. The mission leader stays in the driver’s seat, but now has unique data about group work activities and mission performance to enable decision-making.
Human-Machine Collaboration
Human Machine Collaboration
Sarah Margaret Tulloss, Ken Thompson, Kirk Mancinik, Alexis Sparko, Paul Davis, James Smith, Carlos Gaztambide, Ryan Bock, Courtney Sheldon, Michael Green, Jascha Swisher
In 2019, the LAS Human-Machine Collaboration team used the following definition to frame the year's research and work: human-machine collaboration occurs when humans and machines work together in a way where the machine assists the human without the human having to explicitly tell the machine what to do. The machine monitors what the human is doing, identifies the task, predicts what the human will do next, and, if it can help with the task, automatically does so. This year's work was framed by documented analyst challenges and the analyst information discovery process. The goals were to further document these processes, continue to identify analysts' pain points, and identify when a machine intervention could alleviate those pain points. Areas of investigation for machine interventions included visualization, understanding persuasiveness, tradecraft measurement, the use of augmented reality in analysis, and workflow documentation and representation.
A Slice of What Analysts Do, and How Human-Machine Collaboration Can Help
Kenneth Thompson, James Smith, Michael Green
The "information discovery" process is one of the most complex and critical analysis tasks. It requires analysts to identify and decompose customer requirements, convert them into analyst tasks, and use a myriad of tools, analytics, and workflows to find "golden nuggets" of information that will satisfy national security needs. Analysts encounter numerous challenges in this process, and we believe that human-machine collaboration interventions can alleviate, or possibly eliminate, some of them. The purpose of this research is to understand the information discovery workflow, identify its pain points, and identify, develop, or propose interventions that may alleviate those pain points.
Data Analysts and Their Software Practices: A Profile of the Sabermetrics Community and Beyond
Justin Middleton, Kathryn Stolee
Data analysts adopt practices from software engineering to discover insight in data. However, analysts must reinterpret software skills under the unique pressures of their work, balancing them against other skills like statistics and data curation. Therefore, we ask: how does a community learn and practice software development contextualized as one of many data analytic skills? To answer this, we profile the community around baseball analytics, or sabermetrics, as a focused subcommunity of data analytics. To describe how it adapts software engineering practices in the search for robust statistical insight in sports, we interviewed 10 participants in the sabermetric community and surveyed over 120 more data analysts, both in baseball and beyond. Through these methods, we explore how their work lives at the intersection of science and entertainment; as a consequence, baseball data serves as an accessible yet deep subject for practicing analytic skills. It also encourages a restless research process that rapidly moves analysts between defining rigorous statistical methods and preserving the flexibility to chase interesting problems. In this question-driven process, members of the community inhabit several overlapping roles, ranging from emphases on software development to methods to data, sometimes fulfilling all of them as independent researchers and sometimes specializing on teams. We discuss how the community can foster the balance of these skills.
Understanding Individual Perceptions of Persuasiveness
Zhen Guo, Munindar Singh
Developing persuasive machine teammates for human-machine collaboration requires an understanding of how people change their views. However, persuasion is difficult to model because it involves cumulative influence, language factors, and other confounding factors. We developed a machine learning model to capture the confounding factors and conducted a human study to evaluate our findings. Comparing ratings from third-party raters and predictions from our model against view changes reported by opinion holders shows that our model captures individual perceptions of persuasiveness and outperforms humans at predicting view changes.
miniatureWAFFLE: Data Viz For All
Alexis Sparko
Creating a good (compelling, insightful, intuitive) data visualization is deceptively difficult. Oftentimes the process is one of trial and error, even for seasoned data analysts; for those without formal analytical training and computer scripting skills, it is more challenging still. We present our work investigating ways a machine can help a user create a good visualization, and we demo our prototype web application designed to make good data viz accessible to all. This work includes investigations of visualization recommenders, automated data transformations, incorporation of open-source software, and thoughtful interface design.
Uncovering Personality Differences in Exploratory Visual Analytics Tasks
R. Jordan Crouser, Brent Younce, Kendra Swanson, Alvitta Ottley
Over the last decade, a user's personality and cognitive ability have been found to have substantial impact on task performance and usage patterns across various visualization designs. This has led to a gradual recognition that one-size-fits-all data visualization interfaces are deficient, and that understanding the user herself matters. Ultimately, this body of work indicates that an incompatible visualization can hamper productivity. Here we document recent efforts toward understanding how invariant features of the user correlate with how they interact with the features of an analytic tool. Participants first completed a series of standard assessments of personality and cognitive ability. They were then presented with a large, multimodal, synthetic dataset and asked to explore it using a custom-instrumented interactive search and visualization tool. We find that participants with a more internal Locus of Control (LOC) performed substantially more search/filtering and drill-down operations than their external-LOC peers, and that participants on the more external end of the LOC scale tended to spend less time exploring before beginning to document their findings. This project served as a pilot to explore the feasibility and utility of experiments on the role of personality in data exploration under longer and more realistic conditions. The work will continue in 2020.
Improving Analytic Experiences Through Virtual Environments
Paul Davis, Patti Kenney, Kirk Mancinik, Courtney Sheldon, James Smith, Ken Thompson, Sarah Margaret Tulloss
This project explored the potential of virtual environments, namely augmented reality (AR), mixed reality (MR), and virtual reality (VR), in the analysis of data. Do these technologies give analysts new approaches that improve results? The AR/VR group invited analysts, computer scientists, and data scientists, as well as NC State professors in computer science and science fiction literature, to participate in a brainstorming session. The result was the discovery of several research areas, with associated use cases, on which future research may draw. Concurrently, the group researched eye-gaze biometrics, and additional experiments explored document triage that incorporated machine learning. This year the AR/VR group produced a prototype MR application that combined graphical presentation of data with machine learning; the first step was to display data points color-coded by cluster and suspended above a city map to represent their geolocations over time. For 2020, the group anticipates continued work on screen panels, machine-learning-backed display of information in MR, and possibly document triage.
Augmented Reality Interface for Improving Analyst Interaction with Machine Learning Assistants
Teresa Hong, Benjamin Watson, Kenneth Thompson, Sarah Margaret Tulloss, Courtney Sheldon, Paul Davis
Intelligence analysis is becoming more challenging as data grows, and machine learning automation may help. However, current machine learning systems require tedious interaction. In this project, we are building an interactive augmented reality front-end to help analysts communicate with back-end machine learning assistants.
Measuring Tradecraft Engagement
Ryan Bock, Kirk Mancinik, Stephen Williamson, Skip Smith
A sine qua non of Digital Do It Yourself (DDIY) culture is the creation and sharing of re-usable knowledge artifacts such as executable code, microvideos, and ‘how-to’ content. Tradecraft documentation is a crucial type of knowledge artifact within the analyst DDIY culture. But, as the volume of this documentation has grown and accumulated over time, it has become difficult for analysts to find specific tradecraft articles that may be most relevant and useful to them. One way of approaching this problem is to rank-order tradecraft articles based upon the extent to which specific types and sub-groups of analysts have engaged and interacted with them over time. With this problem and solution concept in mind, LAS undertook an effort in 2019 to devise, calculate, prototype, and present meaningful measurements of analyst engagement with tradecraft articles.
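One plausible shape for such a measurement (the event types, weights, and half-life below are assumptions, not the deployed definition) is a weighted sum of interactions with exponential time decay, rank-ordered per article.

```python
import time

WEIGHTS = {"view": 1.0, "bookmark": 3.0, "copy_code": 5.0, "share": 4.0}
HALF_LIFE_DAYS = 90.0

def rank_articles(events, now=None):
    """events: iterable of (article_id, event_type, unix_timestamp)."""
    now = now or time.time()
    scores = {}
    for article, etype, ts in events:
        age_days = (now - ts) / 86400.0
        decay = 0.5 ** (age_days / HALF_LIFE_DAYS)    # exponential time decay
        scores[article] = scores.get(article, 0.0) + WEIGHTS.get(etype, 0.0) * decay
    return sorted(scores.items(), key=lambda kv: -kv[1])

now = time.time()
events = [("wiki/42", "view", now - 5 * 86400),
          ("wiki/42", "copy_code", now - 5 * 86400),
          ("wiki/7", "view", now - 400 * 86400),
          ("wiki/7", "share", now - 400 * 86400)]
print(rank_articles(events, now))   # recent, hands-on engagement ranks first
```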
Belief and Intention Recognition using Narrative Planning
Rachelyn Farrell, Stephen Ware
Given many agents in a virtual environment, each with their own beliefs and goals, a narrative planner decides what actions they will take. In this project, we studied the opposite problem of belief and intention recognition: given some actions that we know happened, can we find a reasonable set of beliefs and goals for all agents that explains why they did what they did? We evaluated this approach on an existing dataset and in a game where players must infer the beliefs and goals of employees in a small company to explain an insider attack.
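A toy sketch of the inverted problem (not the authors' narrative planner): given an observed path on a small map, each candidate goal is scored by whether every observed step is rational for it, i.e., strictly reduces the shortest-path distance to that goal.

```python
from collections import deque

edges = {"desk": ["hall"], "hall": ["desk", "server_room", "exit"],
         "server_room": ["hall"], "exit": ["hall"]}

def dist(a, b):
    """BFS shortest-path length between two locations."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))

observed = ["desk", "hall", "server_room"]          # the agent's actual path
for goal in ["exit", "server_room", "desk"]:
    # Each step is "rational" for this goal if it strictly reduces distance.
    rational = all(dist(b, goal) < dist(a, goal)
                   for a, b in zip(observed, observed[1:]))
    print(f"goal={goal:12s} explains the path: {rational}")
```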
Rule-Based Cognitive Modeling via Human-Computer Interaction
Abhijeet Krishnan, Chris Martens
Mental models are a form of knowledge that allows humans to simulate systems outside of themselves, and that can predict and explain the behavior of those systems in terms of causal relationships between events. This research attempts to formalize the cognitive process of mental model formation through an action model learning (AML) system, which learns precondition-effect operators from a sequence of interactions between a human and a computer system. We apply AML to a simple puzzle game and explore future applications to intelligent user interfaces.
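A minimal sketch of action model learning under assumed details: from (state, action, next-state) traces, each action's preconditions are estimated as the intersection of its observed pre-states, and its effects as the consistent add/delete lists, STRIPS-style.

```python
def learn_operators(traces):
    """traces: iterable of (pre_state, action, post_state) fluent sets."""
    ops = {}
    for pre, action, post in traces:
        pre, post = frozenset(pre), frozenset(post)
        adds, dels = post - pre, pre - post
        if action not in ops:
            ops[action] = {"pre": set(pre), "add": set(adds), "del": set(dels)}
        else:
            ops[action]["pre"] &= pre      # keep only common preconditions
            ops[action]["add"] &= adds     # keep only consistent effects
            ops[action]["del"] &= dels
    return ops

traces = [
    ({"door_closed", "key_held"}, "open_door", {"door_open", "key_held"}),
    ({"door_closed", "key_held", "light_on"}, "open_door",
     {"door_open", "key_held", "light_on"}),
]
for action, op in learn_operators(traces).items():
    print(action, op)
# open_door: pre={door_closed, key_held}, add={door_open}, del={door_closed}
```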
Advancing Analytic Rigor
Towards the Application of Anticipatory Thinking in Support of Risk Identification
Michael Geden, Andy Smith, Randall Spain, Richard Wagner, Jing Feng, James Lester
Risk management is a critical process for organizations to manage and navigate environments that are uncertain, complex, and dynamic. The first step of the risk management process is risk identification, which has the goal of identifying a diverse space of specific and relevant potential risks. Despite the central role of risk identification in the risk management process, limited work has investigated cognitive processes in risk management. This poster conceptualizes risk identification as a type of anticipatory thinking, which is the process by which we imagine alternative states of the world. It explores how three anticipatory thinking metrics (novelty, specificity, and diversity) can be used to assess risk identification.
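One way the three metrics could be computed over embedded risk statements is sketched below; the definitions (novelty as distance from a reference corpus, specificity as token count, diversity as mean pairwise distance) and the random stand-in encoder are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(text):
    """Stand-in for a real text encoder; returns a random vector here."""
    return rng.normal(size=8)

corpus = [embed(t) for t in ["baseline risk"] * 50]   # prior risk register
responses = ["supplier bankruptcy halts production line 3",
             "data breach",
             "flood damages the riverside warehouse"]
vecs = [embed(t) for t in responses]

novelty = [min(float(np.linalg.norm(v - c)) for c in corpus) for v in vecs]
specificity = [len(t.split()) for t in responses]
diversity = float(np.mean([np.linalg.norm(a - b)
                           for i, a in enumerate(vecs) for b in vecs[i + 1:]]))
print("novelty:", [round(x, 2) for x in novelty])
print("specificity (tokens):", specificity)
print("diversity:", round(diversity, 2))
```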
Combining Anticipatory Thinking and Enterprise Risk Management in Scenario Explorer
Christopher Argenta, Adam Amos-Binks, Abigail Browning, Matthew Lyle
This poster details Applied Research Associates’ progress bringing together concepts of enterprise risk management and anticipatory thinking by extending the Scenario Explorer platform.
User-Guided Keyword Query Expansion
Wil Corvey, John Slankas, Felecia Vega, Sheila Bent, Peter Merrill, Jascha Swisher, Aaron Wiechmann
Research requires access to both a breadth and depth of relevant documents. Even with adequate domain knowledge and a native speaker's command of the document language, it can be difficult to know what to enter into a search engine to retrieve optimal results. Searching becomes much more difficult when either the domain or the document language is unfamiliar. Because the various meanings of a polysemous term may translate into different words, simply applying machine translation to a query term can yield confusing or spurious results; additional technology is required to provide precise, understandable queries regardless of the document language. User-guided query expansion augments the search process by 'imagining,' from a large semantic model, additional query terms in a variety of languages. Because expansion is performed relative to a particular definition of a query term, which users can select, expansions are designed to be relevant to the exact meaning of the query term. Expansion terms are then matched to definitions themselves, providing additional context in case the term or language is unfamiliar to the user. Finally, when the user accepts or rejects an expansion, this feedback is incorporated during model retraining, producing a query expansion engine with increasingly human-like performance.
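A minimal sketch of sense-specific expansion with user feedback, using a toy embedding table; a real system would draw on a large multilingual semantic model, and the entries, weights, and update rule below are illustrative assumptions.

```python
import numpy as np

# Toy sense-tagged embedding table, including cross-language entries.
emb = {"bank(finance)": [0.9, 0.1, 0.0], "bank(river)": [0.0, 0.1, 0.9],
       "banque(fr)": [0.85, 0.15, 0.05], "loan": [0.8, 0.2, 0.1],
       "riverbed": [0.05, 0.0, 0.95], "shore": [0.1, 0.05, 0.9]}
emb = {k: np.asarray(v, dtype=float) for k, v in emb.items()}
weights = {k: 1.0 for k in emb}   # adjusted from user accept/reject feedback

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(sense, k=3):
    q = emb[sense]
    scored = {t: weights[t] * cosine(q, emb[t]) for t in emb if t != sense}
    return sorted(scored, key=scored.get, reverse=True)[:k]

print(expand("bank(finance)"))   # ['banque(fr)', 'loan', 'shore']
weights["loan"] *= 0.1           # the user rejected this expansion
print(expand("bank(finance)"))   # 'loan' falls behind 'shore'
```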
LAS Explorer: Connecting People and Projects
Andrew Shaw, Andrew Crerar, Eli Typhina
LAS Explorer is a software tool that allows users to search the significant body of LAS work and see the relationships between people and projects and how projects have evolved over time. LAS Explorer emerged from user experience research conducted with nearly 50 LAS collaborators representing government, academia, industry, and NC State staff over the course of two years.
Assessing Analytic Techniques in the IC
Sue Mi Kim, Michele Kolb, Vince Streiff, Christine Brugh, Sharon Joines, Byungsoo Kim, Hongyang Liu, Catalina Salamanca
Since the perceived intelligence failures of 9/11 and the invasion of Iraq in the early 2000s, the Intelligence Community (IC) has seen a push for more rigorous standards of intelligence analysis, including a mandate to employ reasoning techniques and practical mechanisms that reveal and mitigate bias (ICD-203). Structured Analytic Techniques (SATs) as developed by Heuer and Pherson have long been taught to IC analysts for this purpose. And yet the claim that use of SATs improves analysis in the IC has not been rigorously tested. The 2019 LAS research focus of this project was to assess the efficacy of SATs in improving analysis in the IC. In collaboration with NC State partners, design thinking principles were creatively applied to develop a prototype experiment that would provide empirical data to answer the research question. Insights gained from such an experiment could influence IC-wide training of analysts and the development of an analytic environment that improves analyst workflows and products by providing tools that encourage collaborative and transparent analytic processes, handle large volumes of data, and organize analysis into understandable visual representations.
Supporting Procedural Knowledge Search
Jaime Arguello, Rob Capra, Sarah Casteel, Charles Higgins
Employees, analysts, and service members across a range of roles need to find and use procedural knowledge as part of their work. Procedural knowledge is knowledge about how to do things, including steps, procedures, methods, and algorithms. Many organizations maintain knowledge bases of procedural documents contributed and shared among employees. However, searching for procedural knowledge is not well supported by existing search systems and can be difficult: users may be unsure of what a procedure involves, what background information is needed, or what vocabulary is relevant. In this research project, we aim to (1) understand how people search for procedural knowledge and (2) develop algorithms and tools to support those searches. To address goal #1, we are conducting two studies: (a) a semi-structured interview study with representative users who need to find procedural knowledge, and (b) a lab-based user study investigating how users search for procedural knowledge of varying complexity. Both studies are currently in the planning and pilot-testing stage. To address goal #2, we plan to leverage the insights gained in goal #1 to develop tools that identify and extract information about the structure and relationships present in procedural knowledge documents. We also plan to design and develop tools that directly support searches for procedural knowledge, with interfaces based on several approaches, including static facets, dynamic facets, and procedural similarity.
BEAST – Analytic Rigor at Big Data Scales and Speeds
William Elm
The intent of the BEAST program was to develop game-changing, technology-enabled tradecraft to support the fundamental cognitive work of analytic decision-making. Just as prior work on structured analytic tradecraft attempted to improve analytic quality, the BEAST program reprised that effort with the addition of leading practices in Cognitive Systems Engineering (CSE) to inform innovations that were experimentally verified for transition to the Enterprise. The program tackled the long-standing problems of intelligence analysts: a complex and cognitively challenging work domain rife with incomplete information, intentional deception, information uncertainty, time pressure, and data overload. Analysts adopt numerous adaptations to cope with these pressures, often at the cost of introducing brittleness and cognitive friction. The challenge for the BEAST program was to identify proven patterns of support that enable near-expert decision-making performance under these conditions. The current BEAST prototype is an integration platform, transitioned to the High Side, that currently supports 44 tradecraft concepts and associated technologies. Combined, these concepts produce a decision-centered analyst environment that makes achieving true analytic rigor practical at real-world scales and complexities. During analyst engagements in 2018, analysts successfully applied BEAST to real-world mission problems; for several analysts, this experience changed the course of their analyses. BEAST demonstrated its potential so convincingly that many attendees expressed their intent to continue their analyses in BEAST after returning to their home offices. Throughout 2018 and 2019, BEAST has seen steady use from a set of true early adopters who are applying it to their current mission problems and charting a course for an entirely new Enterprise tradecraft for the future.