Abstract
Model management is used in IT management to maintain abstract representations of business- and IT-related artifacts. In enterprise architecture management (EAM), these models help to understand complexity, reason about changes, and achieve a holistic view of the enterprise. According to the IT4IT framework, models are used in various phases of the IT value stream. For example, low-fidelity models reflect the current state of each enterprise architecture layer and the interconnections between layers, whereas runtime models capture elements of running systems. However, efficient model management faces numerous challenges; for example, model creation and maintenance are still conducted manually, which is error-prone, cost-intensive, and time-consuming. In addition, microservice architectures introduce a high level of complexity with regard to model management, as our empirical findings confirm. To tackle these challenges, research endeavors in EAM as well as in model-driven engineering propose to automate the creation of models by collecting the required information from the various information sources used along the IT value stream and reassembling the scattered information in one central model management. However, as the literature confirms, several problems remain: the extraction of models leads to ambiguous architecture documentation, and the merging of models leads to conflicts. Moreover, model management in EAM restricts modeling to current and future states of an enterprise. An approach for the automated maintenance and linking of all models required along the IT value chain in a central repository does not yet exist.

In this research, we present an approach that automatically reconstructs the models from the individual phases of the IT value chain based on runtime information and configuration files assigned to each application in the observed IT landscape. We transform the reconstructed models into a linked knowledge graph, which serves as our central model management. This graph can be accessed via a uniform visual interface and query language. For this purpose, we design and develop a tool called MICROLYZE.

We apply design science as our research methodology to evaluate the developed concepts and the corresponding prototype. By conducting two case studies in two different companies, we assess the applicability of our approach and the practicability of the prototype. Nineteen interviews with practitioners from 17 different companies provide feedback on the proposed software, process, and visualization design of the prototype. The reported experiences show that MICROLYZE is able to discover most of the models from each phase of the IT value stream and successfully connect them into a linked knowledge graph. The elaborated solution approach was positively received by the practitioners. However, the concept still has to be examined with respect to its scalability, whether the information base is truly fully recovered, and its ability to uncover the rationale behind architecture changes and certain runtime behavior. Concerns regarding data privacy and unintended employee monitoring also remain.
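To make the linking idea concrete, the following minimal sketch (not the actual MICROLYZE implementation) illustrates how model elements reconstructed from different IT value stream phases could be connected into one queryable knowledge graph. The layer names, node identifiers, and relation types are illustrative assumptions, and networkx stands in for whatever graph store and query language the tool itself uses.

```python
# Illustrative sketch only: linking reconstructed per-phase models into one
# knowledge graph. All names and relations below are hypothetical examples.
import networkx as nx

graph = nx.MultiDiGraph()

# Nodes from different architecture layers, as they might be reconstructed
# from runtime information (e.g. traces) and configuration files.
graph.add_node("checkout-process", layer="business")
graph.add_node("webshop", layer="application")
graph.add_node("order-service", layer="microservice")
graph.add_node("payment-service", layer="microservice")
graph.add_node("host-42", layer="infrastructure")

# Edges connect the per-phase models into one linked knowledge graph.
graph.add_edge("checkout-process", "webshop", relation="realized_by")
graph.add_edge("webshop", "order-service", relation="uses")
graph.add_edge("order-service", "payment-service", relation="calls")
graph.add_edge("order-service", "host-42", relation="deployed_on")
graph.add_edge("payment-service", "host-42", relation="deployed_on")

# A simple cross-layer query: which infrastructure elements (transitively)
# support the checkout business process?
reachable = nx.descendants(graph, "checkout-process")
hosts = [n for n in reachable if graph.nodes[n]["layer"] == "infrastructure"]
print(hosts)  # ['host-42']
```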
Original language | American English |
---|---|
Supervisors/Advisors | |
State | Submitted - 16 Feb 2021 |