Advances in sensor and data-acquisition technologies present profound opportunities to collect diverse types of data from engineered systems. The collected data can then be used to optimally manage and control those systems. To construct such intelligent systems, our laboratory conducts research in the following areas:

**Real-time data collection/processing framework**

To deal with a large volume of streaming data, raw sensor data must be collected and processed efficiently in real time. We investigate how to organize the data in a structured format, how to design and extract features relevant to characterizing a target system, and how to efficiently manage both the data and the analytics models constructed from them.
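As a minimal sketch of the feature-extraction step, the snippet below computes common time-domain features (mean, standard deviation, RMS, peak) over sliding windows of a one-dimensional sensor stream. The function name, window sizes, and the simulated vibration signal are illustrative assumptions, not part of our actual pipeline.

```python
import numpy as np

def window_features(stream, window=50, step=25):
    """Extract simple time-domain features from overlapping
    windows of a 1-D sensor stream (illustrative sketch)."""
    feats = []
    for start in range(0, len(stream) - window + 1, step):
        w = stream[start:start + window]
        feats.append({
            "mean": float(np.mean(w)),
            "std": float(np.std(w)),
            "rms": float(np.sqrt(np.mean(w ** 2))),
            "peak": float(np.max(np.abs(w))),
        })
    return feats

# Hypothetical vibration signal from a milling spindle: a 50 Hz
# tone plus measurement noise, sampled at 500 points over 1 s.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(500)
features = window_features(signal)
```

In a streaming setting, the same per-window computation would run incrementally as new samples arrive, and the resulting feature vectors (rather than raw samples) would be stored and fed to downstream models.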

As an example, the figure below shows the procedure for collecting sensor monitoring data from a milling machine and constructing an analytics model from the data.

**Real-time adaptive learning algorithms for distributed (networked) systems**

To efficiently extract knowledge from data, our lab explores the theoretical and practical aspects of machine learning algorithms, particularly those designed for modeling large-scale, networked systems. Most engineering systems are composed of subsystems that interact with each other to achieve global objectives, and their interaction patterns change depending on the environmental context. Thus, to effectively characterize such networked systems under various contexts, we study how to (1) model and extract the interaction patterns (i.e., causal relationships) among the subsystems, and (2) incorporate contextual information into the data-driven model.
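To give a flavor of extracting directed interaction patterns from data, the sketch below scores how strongly the past of one subsystem's signal predicts another's, using a crude lagged-correlation proxy (a much simpler stand-in for the causal-modeling methods described above; the signals and function name are hypothetical).

```python
import numpy as np

def lagged_influence(x, y, lag=1):
    """Score how strongly past values of x predict current values
    of y -- a crude, correlation-based proxy for a directed
    interaction (not a full causal analysis)."""
    x_past, y_now = x[:-lag], y[lag:]
    return abs(np.corrcoef(x_past, y_now)[0, 1])

# Two hypothetical subsystem signals: y is driven by x with a
# one-step delay, so the influence x -> y should score high
# while the reverse direction y -> x should score low.
rng = np.random.default_rng(1)
x = rng.standard_normal(300)
y = np.empty(300)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.2 * rng.standard_normal(299)

forward = lagged_influence(x, y)   # x -> y
backward = lagged_influence(y, x)  # y -> x
```

Comparing the two scores recovers the direction of the simulated interaction; in practice, richer models (e.g., Granger-style regressions or graphical models) are needed to handle confounders and context-dependent interaction patterns.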

Our laboratory mainly employs non-parametric Bayesian methods (e.g., Gaussian processes and Dirichlet processes) to model the behavior of complex target systems. Despite their powerful representational capacity and flexibility, such methods generally suffer from high computational requirements. To resolve this, our lab explores various approximation methods that increase their computational efficiency. The figure below summarizes our approaches to reducing the computational burden of constructing Gaussian process regression models.

**Decision-making strategies under uncertainty**

Most engineering systems continuously interact with a stochastic environment. The behavior of such systems in uncertain environments can be learned from data, and this enhanced understanding can be exploited to make optimal decisions regarding the maintenance and operation of the systems. Our lab explores various sequential decision-making procedures (such as bandit problems, Bayesian optimization, Markov decision processes, and reinforcement learning) in which learning about the target system and optimizing the system response occur simultaneously. We focus on both the conceptualization and the implementation of these algorithms for controlling target systems that interact with stochastic environments.
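The bandit setting is the simplest instance of this simultaneous learning-and-optimizing loop. As a self-contained sketch, the snippet below runs the classic UCB1 rule on three arms with noisy payoffs: each pull both refines the estimate of an arm's mean reward (learning) and is chosen to maximize an optimistic value estimate (optimizing). The reward model and parameters are illustrative assumptions.

```python
import math
import random

def ucb1(arm_means, horizon=5000, seed=0):
    """UCB1: pull each arm once, then repeatedly pick the arm that
    maximizes empirical mean + exploration bonus."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n     # pulls per arm
    sums = [0.0] * n     # accumulated reward per arm
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # initialize: try each arm once
        else:
            arm = max(range(n), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = arm_means[arm] + rng.gauss(0, 0.1)  # noisy payoff
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.2, 0.5, 0.8])  # arm 2 has the highest mean reward
```

After the horizon, the pull counts concentrate on the best arm while every arm still receives enough pulls to keep its estimate honest, which is the exploration/exploitation trade-off in miniature.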

As an example, the figure below shows the concept of the Bayesian Ascent (BA) algorithm, developed by incorporating into the Bayesian Optimization (BO) framework a strategy that regulates the search domain, as used in trust region methods. BA consists of two iterative phases: a learning phase and an optimization phase. In the learning phase, the BA algorithm approximates the target function with Gaussian process (GP) regression fitted to the measured inputs and outputs of the target system. In the optimization phase, the BA algorithm determines the next sampling point so as to learn more about the target function (exploration) while also improving the target value (exploitation). The sampling strategy in the optimization phase is designed so that the target value improves monotonically as the input to the target system is changed gradually: the next input is selected within a trust region as the one that achieves the largest expected improvement in the target value. Depending on the observed improvement, the size of the trust region is adaptively adjusted to expedite convergence to the optimum.
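The loop below is a highly simplified sketch of this idea, not the published BA algorithm: it alternates a GP-fitting step (learning phase) with an acquisition step restricted to a trust region around the current best input (optimization phase), growing the region after an improvement and shrinking it otherwise. For brevity it uses a UCB-style score in place of expected improvement, and the target function, kernel, and all parameters are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(xs, ys, xq, noise=1e-4):
    """GP posterior mean and std at query points xq (learning phase)."""
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Kq = rbf(xq, xs)
    mu = Kq @ np.linalg.solve(K, ys)
    var = 1.0 - np.sum(Kq * np.linalg.solve(K, Kq.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def target(x):
    return -(x - 0.7) ** 2  # hypothetical system response, optimum at 0.7

xs, ys = [0.1], [target(0.1)]
radius = 0.1  # trust-region half-width around the current best input
for _ in range(25):
    best = xs[int(np.argmax(ys))]
    # Optimization phase: score candidates inside the trust region
    # with an optimistic (UCB-style) criterion.
    cand = np.linspace(max(0.0, best - radius), min(1.0, best + radius), 100)
    mu, sd = gp_posterior(np.array(xs), np.array(ys), cand)
    x_next = cand[int(np.argmax(mu + sd))]
    y_next = target(x_next)
    # Adapt the trust region: grow on improvement, shrink otherwise.
    radius = min(0.3, radius * 1.5) if y_next > max(ys) else max(0.02, radius * 0.5)
    xs.append(x_next)
    ys.append(y_next)

best_x = xs[int(np.argmax(ys))]
```

Because each new input is confined to a trust region around the incumbent, the system's operating point changes gradually while the sampled target values climb toward the optimum, which is the behavior the BA design aims for.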