TY - GEN
T1 - Causal Model Extraction from Attack Trees to Attribute Malicious Insider Attacks
AU - Ibrahim, Amjad
AU - Rehwald, Simon
AU - Scemama, Antoine
AU - Andres, Florian
AU - Pretschner, Alexander
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
AB - In the context of insiders, preventive security measures have a high likelihood of failing because insiders must have sufficient privileges to perform their jobs. Instead, in this paper, we propose to address the insider threat with a detective measure that holds an insider accountable in case of violations. However, to enable accountability, we need to create causal models that support reasoning about the causality of a violation. Current security models (e.g., attack trees) do not allow that; still, they are a useful source for creating causal models. In this paper, we discuss the value added by causal models in the security context. Then, we capture the interaction between attack trees and causal models by proposing an automated approach to extract the latter from the former. Our approach considers insider-specific attack classes such as collusion attacks, and causal-model-specific properties such as preemption relations. We present an evaluation of the validity and effectiveness of the resulting causal models, as well as the efficiency of the extraction process.
UR - http://www.scopus.com/inward/record.url?scp=85097435994&partnerID=8YFLogxK
DO - 10.1007/978-3-030-62230-5_1
M3 - Conference contribution
AN - SCOPUS:85097435994
SN - 9783030622299
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 23
BT - Graphical Models for Security - 7th International Workshop, GraMSec 2020, Revised Selected Papers
A2 - Eades III, Harley
A2 - Gadyatskaya, Olga
PB - Springer Science and Business Media Deutschland GmbH
T2 - 7th International Workshop on Graphical Models for Security, GraMSec 2020
Y2 - 22 June 2020 through 22 June 2020
ER -