Causal Model Extraction from Attack Trees to Attribute Malicious Insider Attacks

Amjad Ibrahim, Simon Rehwald, Antoine Scemama, Florian Andres, Alexander Pretschner

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

In the context of insiders, preventive security measures have a high likelihood of failing because insiders must have sufficient privileges to perform their jobs. Instead, in this paper, we propose to address the insider threat with a detective measure that holds an insider accountable in case of violations. However, to enable accountability, we need to create causal models that support reasoning about the causality of a violation. Current security models (e.g., attack trees) do not allow that; still, they are a useful source for creating causal models. In this paper, we discuss the value added by causal models in the security context. Then, we capture the interaction between attack trees and causal models by proposing an automated approach to extract the latter from the former. Our approach considers insider-specific attack classes such as collusion attacks and causal-model-specific properties such as preemption relations. We present an evaluation of the resulting causal models’ validity and effectiveness, in addition to the efficiency of the extraction process.

Original language: English
Title of host publication: Graphical Models for Security - 7th International Workshop, GraMSec 2020, Revised Selected Papers
Editors: Harley Eades III, Olga Gadyatskaya
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 3-23
Number of pages: 21
ISBN (Print): 9783030622299
DOIs
State: Published - 2020
Event: 7th International Workshop on Graphical Models for Security, GraMSec 2020 - Boston, United States
Duration: 22 Jun 2020 → 22 Jun 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12419 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 7th International Workshop on Graphical Models for Security, GraMSec 2020
Country/Territory: United States
City: Boston
Period: 22/06/20 → 22/06/20
