S. Bosse, D. Weiss, D. Schmidt, Supervised Distributed Multi-Instance and Unsupervised Single-Instance Autoencoder Machine Learning for Damage Diagnostics with High-Dimensional Data—A Hybrid Approach and Comparison Study, Computers 2021, 10(3), 34;
doi:10.3390/computers10030034
Structural health monitoring (SHM) is a promising technique for the in-service inspection of technical structures in a broad field of applications, aiming to reduce maintenance effort as well as overall structural weight. SHM is basically an inverse problem: deriving physical properties such as damages or material inhomogeneities (target features) from sensor data. Models defining the relationship between predictable features and sensors are often required but not available. The main objective of this work is the investigation of model-free Distributed Machine Learning (DML) for damage diagnostics under resource and failure constraints, using multi-instance ensemble and model fusion strategies and featuring improved scaling and stability compared with centralised single-instance approaches. The diagnostic system delivers two features: a binary damage classification (damaged or non-damaged) and, in the case of a damaged structure, an estimation of the spatial damage position. The proposed damage diagnostics architecture is designed to be deployable in low-resource sensor networks with soft real-time capabilities. Two different machine learning methodologies and architectures are evaluated and compared, representing low- and high-resolution sensor processing for low- and high-resolution damage diagnostics: a dedicated, supervised-trained low-resource approach and an unsupervised-trained high-resource deep learning approach, respectively. Both architectures use state-based recurrent Artificial Neural Networks that process spatially and time-resolved sensor data from experimental ultrasonic guided-wave measurements of a hybrid-material (carbon fibre laminate) plate with pseudo defects. Finally, both architectures can be fused into a hybrid architecture with improved damage detection accuracy and reliability. An extensive evaluation of the damage prediction by both systems shows high reliability and accuracy of damage detection and localisation, even for the distributed multi-instance architecture, with a resolution in the order of the sensor distance.
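The unsupervised detection principle can be sketched in a few lines: train an autoencoder on baseline (non-damaged) signals only, then flag a measurement as damaged when its reconstruction error exceeds a threshold derived from the training distribution. The sketch below is a toy feed-forward autoencoder on synthetic traces, purely illustrative; the paper's actual architecture uses state-based recurrent networks on experimental guided-wave data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(n=64, damaged=False):
    """Synthetic guided-wave-like sensor trace (illustrative only)."""
    t = np.linspace(0, 1, n)
    s = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
    if damaged:
        s += 0.8 * np.sin(2 * np.pi * 17 * t)  # echo from a pseudo defect
    return s + 0.02 * rng.standard_normal(n)

# Training data: baseline (non-damaged) traces only -> unsupervised
X = np.stack([make_signal() for _ in range(200)])

n_in, n_hid, lr = X.shape[1], 8, 0.01
W1 = rng.standard_normal((n_in, n_hid)) * 0.1   # encoder weights
W2 = rng.standard_normal((n_hid, n_in)) * 0.1   # decoder weights

for epoch in range(300):
    H = np.tanh(X @ W1)          # encode
    Y = H @ W2                   # decode (linear output)
    E = Y - X                    # reconstruction error
    # Batch gradient descent on mean squared error
    gW2 = H.T @ E / len(X)
    gH = (E @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

def recon_error(x):
    return float(np.mean((np.tanh(x @ W1) @ W2 - x) ** 2))

# Threshold from the training distribution of reconstruction errors
errs = [recon_error(x) for x in X]
thresh = np.mean(errs) + 3 * np.std(errs)

baseline_err = recon_error(make_signal())
damage_err = recon_error(make_signal(damaged=True))
print(baseline_err, thresh, damage_err)
```

A damaged trace contains a component the autoencoder never learned to reconstruct, so its error rises above the baseline threshold; this is the single-instance anomaly-detection view of damage classification.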
S. Bosse, Distributed Serverless Chat Bot Networks using mobile Agents: A Distributed Data Base Model for Social Networking and Data Analytics, 13th International Conference on Agents and Artificial Intelligence (ICAART),
Online, 4-6.2.2021
Today, human-machine dialogues performed and moderated by chat bots are ubiquitous. Commonly, centralised, server-based chat bot software is used to implement rule-based and intelligent dialogue robots; human-to-human networking is typically not supported. Rule-based chat bots implement an interface to a knowledge data base in a more natural way, but their dialogue topics are narrow and static. Intelligent chat bots aim to improve dialogue and conversational quality, as well as user experience, over time. In this work, mobile agents are used to implement a distributed, decentralised, serverless dialogue robot network that enables ad-hoc communication between humans and machines (networks) and between human groups via the chat bot network, supporting personalised as well as mass communication. That is, the chat bot network aims to extend the communication and social interaction range of humans, especially in mobile environments, by a distributed knowledge and data base approach. Additionally, the chat bot network is a sensor data acquisition and aggregation system enabling large-scale crowd-based analytics. A first proof-of-concept demonstrator is presented, identifying the challenges arising with self-organising distributed chat bot networks in resource-constrained mobile networks. The novelty of this work is a hybrid chat bot multi-agent architecture enabling scalable, distributed, and adaptive communicating chat bot networks.
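The rule-based dialogue core such an agent carries can be illustrated minimally; all names below are hypothetical, and the mobile-agent platform and networking layer are deliberately not shown. Each exchange is logged, reflecting the chat bot network's second role as a data aggregator.

```python
import re

# Hypothetical illustration of a scriptable rule-based dialogue:
# pattern -> canned response, with every exchange logged for later
# crowd-based analytics.
RULES = [
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\bweather\b", re.I), "Which city are you asking about?"),
    (re.compile(r"\bbye\b", re.I), "Goodbye!"),
]

def reply(utterance, log):
    """Match the first applicable rule; log the exchange."""
    for pattern, answer in RULES:
        if pattern.search(utterance):
            log.append((utterance, answer))
            return answer
    return "Sorry, I did not understand that."

log = []
print(reply("hi there", log))   # first rule matches "hi"
print(reply("xyzzy", log))      # no rule matches -> fallback
```

In the distributed setting described above, such a rule table would travel with the mobile agent and be extended or adapted on the fly, rather than residing on a central server.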
S. Bosse, Parallel and Distributed Agent-based Simulation of large-scale socio-technical Systems with loosely coupled Virtual Machines, Proc. of the SIMULTECH Conference 2021, International Conference on Simulation and Modeling Methodologies, Technologies and Applications,
Agent-based systems are inherently distributed and parallel due to their distributed memory model, but agent-based simulation is often characterised by a shared memory model. This paper discusses the challenges of and a solution for large-scale distributed agent-based simulation using virtual machines. Simulation of large-scale multi-agent systems with more than 10,000 agents on a single processor node requires high computational times that can be far beyond the constraints set by the users, e.g., in real-time capable simulations. Parallel and distributed simulation involves the transformation of a shared memory model into a communication-based distributed memory model, which can create significant communication overhead. In this work, instead of distributing an originally monolithic simulator with integrated visualisation, a loosely coupled distributed agent process platform cluster network performs the agent processing for the simulation and is monitored by a visualisation and simulation control service. A typical use case of traffic simulation in a smart city context is used for evaluating the performance of the proposed DSEJAMON architecture.
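The loose-coupling idea, node-local agent partitions exchanging migration messages instead of sharing memory, with a monitoring service polling per-node snapshots, can be sketched as follows. This is a single-process toy with hypothetical names and region logic, not the DSEJAMON platform itself.

```python
from queue import Queue

# Illustrative sketch: each "node" steps its own agent partition; agents
# that leave the node's region migrate via a message queue rather than
# through shared memory, and a monitor polls snapshots.
class Node:
    def __init__(self, name, outbox):
        self.name, self.agents, self.outbox = name, [], outbox

    def step(self):
        moved = []
        for agent in self.agents:
            agent["pos"] += agent["v"]
            if agent["pos"] > 10:        # agent leaves this node's region
                agent["pos"] -= 10       # re-enter at the neighbour's origin
                moved.append(agent)
        for agent in moved:
            self.agents.remove(agent)
            self.outbox.put(agent)       # migration message

    def receive(self, inbox):
        while not inbox.empty():
            self.agents.append(inbox.get())

    def snapshot(self):                  # polled by the monitoring service
        return {"node": self.name, "count": len(self.agents)}

q_ab, q_ba = Queue(), Queue()
a, b = Node("A", q_ab), Node("B", q_ba)
a.agents = [{"pos": 9, "v": 2}, {"pos": 1, "v": 1}]

for _ in range(3):
    a.step(); b.step()
    a.receive(q_ba); b.receive(q_ab)

print([n.snapshot() for n in (a, b)])
```

In the real system each node would be a separate virtual-machine process on its own host, so the queues become network messages and the snapshot polling is what keeps the visualisation service decoupled from the simulation loop.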
D. Lehmhus, S. Bosse, A. Mounchili, A. Struß, Putting Stiffness where it’s needed: Optimizing The Mechanical Response of Multi-Material Structures, ICEAF VI - 6th International Conference of Engineering Against Failure,
22-25.6.2021
Modern manufacturing processes like multi-material additive manufacturing or, to a lesser degree, compound casting allow an almost arbitrary distribution of different materials, or, for that matter, different density levels, over a component's volume. The difficulty lies in finding the optimal spatial material distribution. Multi-Phase Topology Optimization (MPTO) is one approach towards this end. The method relies on iterative, linear-elastic FEM simulations which provide element- as well as part-level data on elastic strain energy. This information is used to redistribute predefined material fractions, characterized by different values of Young's modulus, according to their relative properties in order to minimize the total strain energy under a given design load. Solving such a minimization problem is the central part of this work; achieving this aim means that a configuration providing maximum stiffness has been identified. The present study examines different material redistribution and optimization techniques based on genetic algorithms and simulated annealing and compares them in terms of their optimization results, applicability, relative performance, and scalability. Specifically, unconstrained (randomized) model-free approaches using Monte Carlo methods are contrasted with others incorporating physically or technically justified constraints that limit the configuration space during the re-association of material properties to the individual finite elements. Typically, the minimization problem delivers a set of solutions, and iterative minimization algorithms tend to settle in local, non-optimal minimum states. Both genetic algorithms and simulated annealing deploy partial randomization for the generation of new configurations, a key methodology to explore a much larger volume of the search space than classical gradient-based algorithms do. In contrast to simulated annealing, genetic algorithms construct new material configurations from previous solutions. The cost functions used by both approaches depend on FEM simulation, a computationally intensive task. To minimize the number of FEM simulations and cost function evaluations, approximate caching and configuration pre-analysis/selection using constraint models are introduced.
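A minimal sketch of the simulated-annealing variant can make the two key ingredients concrete: constraint-preserving swap moves (material fractions stay fixed) and cached cost evaluations. A toy surrogate stands in for the FEM-based strain energy here; all constants and the load model are hypothetical.

```python
import math
import random
from functools import lru_cache

random.seed(1)

# Toy surrogate for the FEM-based strain energy: high load on a soft
# element costs more (energy ~ load / stiffness). The real cost function
# would run a linear-elastic FEM solve, hence the value of caching.
N_ELEMS, PHASES = 12, (1.0, 2.0, 3.0)   # hypothetical Young's modulus levels
LOADS = [math.sin(i) ** 2 + 0.1 for i in range(N_ELEMS)]

@lru_cache(maxsize=None)
def strain_energy(config):
    return sum(l / e for l, e in zip(LOADS, config))

def neighbour(config):
    """Swap the phases of two random elements: fractions stay fixed."""
    c = list(config)
    i, j = random.sample(range(N_ELEMS), 2)
    c[i], c[j] = c[j], c[i]
    return tuple(c)

# Start with equal fractions of each phase
initial = tuple(PHASES[i % 3] for i in range(N_ELEMS))
config, best, T = initial, initial, 1.0
for step in range(2000):
    cand = neighbour(config)
    d = strain_energy(cand) - strain_energy(config)
    # Metropolis criterion: always accept improvements, sometimes accept
    # worse configurations to escape local minima
    if d < 0 or random.random() < math.exp(-d / T):
        config = cand
        if strain_energy(config) < strain_energy(best):
            best = config
    T *= 0.995                           # geometric cooling schedule

print(strain_energy(initial), strain_energy(best))
```

Because the swap move only exchanges phase assignments between elements, every visited configuration keeps the predefined material fractions, which is one of the constraint models mentioned above; the `lru_cache` plays the role of the approximate caching that spares repeated cost evaluations.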
Typical data surveys as a human-centred data source reflect only snapshots of dynamical systems on the time-scale. Commonly, surveys in social science are performed in a participatory way and with well-designed (static) questionnaires. But crowd sensing is gaining attraction, either to collect supplementary data or to replace traditional survey formats, moving towards ad-hoc opportunistic micro-surveys. The quality of such crowd-sourced data varies and is often questionable, with high bias and missing values. Ubiquitous and mobile devices, e.g., smart phones, gain attraction as data sources with a high spatial and temporal coverage. Continuous sampling of data streams can significantly improve the quality of statistical data analysis, the generalisation of predictive modelling, and simulation. We present a unified agent-based data collection, aggregation, analysis, and tightly coupled simulation methodology, providing a valuable contribution to Computational Social Science (CSS), at least theoretically. Mobile computational agents (mobile software processes) are used for self-organising data collection and aggregation, using machine data and user data acquired via scriptable dialogues. This approach extends the data collection process in the spatial and temporal domain, providing the high data coverage and quality required, e.g., by accurate ML methods. The issues and challenges of long-term self-organising mobile crowd sensing are discussed and analysed with some practical demonstrations in comparison with theoretical expectations.