Dynamic Network Representation Based on Latent Factorization of Tensors
Authors: Hao Wu, Xuke Wu, Xin Luo
- Publisher: Springer
- Year: 2023
- Language: English
- Pages: 89
- Series: SpringerBriefs in Computer Science
- Category: Library
Free of charge; no registration required. For personal study only.
Synopsis
A dynamic network is frequently encountered in real industrial applications, such as the Internet of Things. It is composed of numerous nodes and large-scale dynamic real-time interactions among them, where each node represents a specific entity, each directed link represents a real-time interaction, and the strength of an interaction is quantified as the weight of a link. As the number of involved nodes grows drastically, it becomes impossible to observe their full interactions at each time slot, making the resultant dynamic network High Dimensional and Incomplete (HDI). Despite its HDI nature, such a network with directed and weighted links contains rich knowledge regarding the involved nodes' various behavior patterns. Therefore, it is essential to study how to build efficient and effective representation learning models for acquiring this knowledge.
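As a minimal illustration of the HDI idea described above (not taken from the book), a dynamic network can be stored as a sparse tensor that keeps only the observed entries, indexed by (source node, target node, time slot). The nodes, time slots, and weights below are invented for the sketch.

```python
# Sparse storage of an HDI dynamic-network tensor: only observed
# directed, weighted links are kept; all other entries are unknown.
observed_links = {
    (0, 1, 0): 2.5,   # node 0 -> node 1 at time slot 0, weight 2.5
    (1, 2, 0): 0.7,
    (0, 2, 1): 1.3,
}

num_nodes, num_slots = 3, 2

# Density = observed entries / all possible entries; in a real HDI
# network this ratio is extremely small.
density = len(observed_links) / (num_nodes * num_nodes * num_slots)
print(f"tensor density: {density:.2%}")  # -> tensor density: 16.67%
```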
In this book, we first model a dynamic network as an HDI tensor and present the basic latent factorization of tensors (LFT) model. Then, we propose four representative LFT-based network representation methods. The first method integrates the short-term bias, long-term bias, and preprocessing bias to precisely represent the volatility of network data. The second method utilizes a proportional-integral-derivative (PID) controller to construct an adjusted instance error and thereby achieve a higher convergence rate. The third method addresses the non-negativity of fluctuating network data by constraining latent features to be non-negative and incorporating extended linear biases. The fourth method adopts an alternating direction method of multipliers (ADMM) framework to build a learning model that represents dynamic networks with high precision and efficiency.
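The basic LFT idea underlying these methods can be sketched as follows. This is a simplified illustration under assumed hyper-parameters, not the book's exact models: each observed entry y[i,j,k] is approximated by the inner product of rank-R latent feature vectors S[i], D[j], T[k], trained by stochastic gradient descent (SGD) on the observed entries only.

```python
import random

random.seed(0)
R, lr, reg = 4, 0.05, 0.02   # illustrative rank, learning rate, regularization
observed = {(0, 1, 0): 2.5, (1, 2, 0): 0.7, (0, 2, 1): 1.3}
num_nodes, num_slots = 3, 2

# Latent feature matrices for source nodes, target nodes, and time slots.
S = [[random.uniform(0, 0.1) for _ in range(R)] for _ in range(num_nodes)]
D = [[random.uniform(0, 0.1) for _ in range(R)] for _ in range(num_nodes)]
T = [[random.uniform(0, 0.1) for _ in range(R)] for _ in range(num_slots)]

def predict(i, j, k):
    """Rank-R CP-style estimate of entry (i, j, k)."""
    return sum(S[i][r] * D[j][r] * T[k][r] for r in range(R))

for epoch in range(2000):
    for (i, j, k), y in observed.items():
        e = y - predict(i, j, k)          # instance error on one observed entry
        for r in range(R):
            s, d, t = S[i][r], D[j][r], T[k][r]
            # SGD step with L2 regularization, on observed entries only.
            S[i][r] += lr * (e * d * t - reg * s)
            D[j][r] += lr * (e * s * t - reg * d)
            T[k][r] += lr * (e * s * d - reg * t)

rmse = (sum((y - predict(*idx)) ** 2 for idx, y in observed.items())
        / len(observed)) ** 0.5
print(f"training RMSE: {rmse:.4f}")
```

The key point of the density-oriented design is that both storage and per-epoch cost scale with the number of observed entries rather than with the full tensor size, which is what makes LFT practical on HDI data.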
Table of Contents
Preface
Contents
Chapter 1: Introduction
1.1 Overview
1.2 Formulating a Dynamic Network into an HDI Tensor
1.3 Latent Factorization of Tensors
1.4 Book Organization
References
Chapter 2: Multiple Biases-Incorporated Latent Factorization of Tensors
2.1 Overview
2.2 MBLFT Model
2.2.1 Short-Term Bias
2.2.2 Preprocessing Bias
2.2.3 Long-Term Bias
2.2.4 Parameter Learning Via SGD
2.3 Performance Analysis of MBLFT Model
2.3.1 MBLFT Algorithm Design
2.3.2 Effect of Short-Term Bias
2.3.3 Effect of Preprocessing Bias
2.3.4 Effect of Long-Term Bias
2.3.5 Comparison with State-of-the-Art Models
2.4 Summary
References
Chapter 3: PID-Incorporated Latent Factorization of Tensors
3.1 Overview
3.2 PLFT Model
3.2.1 A PID Controller
3.2.2 Objective Function
3.2.3 Parameter Learning Scheme
3.3 Performance Analysis of PLFT Model
3.3.1 PLFT Algorithm Design
3.3.2 Effects of Hyper-Parameters
3.3.3 Comparison with State-of-the-Art Models
3.4 Summary
References
Chapter 4: Diverse Biases Nonnegative Latent Factorization of Tensors
4.1 Overview
4.2 DBNT Model
4.2.1 Extended Linear Biases
4.2.2 Preprocessing Bias
4.2.3 Parameter Learning Via SLF-NMU
4.3 Performance Analysis of DBNT Model
4.3.1 DBNT Algorithm Design
4.3.2 Effects of Biases
4.3.3 Comparison with State-of-the-Art Models
4.4 Summary
References
Chapter 5: ADMM-Based Nonnegative Latent Factorization of Tensors
5.1 Overview
5.2 ANLT Model
5.2.1 Objective Function
5.2.2 Learning Scheme
5.2.3 ADMM-Based Learning Sequence
5.3 Performance Analysis of ANLT Model
5.3.1 ANLT Algorithm Design
5.3.2 Comparison with State-of-the-Art Models
5.4 Summary
References
Chapter 6: Perspectives and Conclusion
6.1 Perspectives
6.2 Conclusion
References
Similar Volumes
This book presents the outcomes of the workshop sponsored by the National Natural Sciences Foundation of China and the UK Newton Fund, British Council Researcher Links. The workshop was held in Harbin, China, from 14 to 17 July 2017, and brought together some thirty young (postdoctoral) researchers …

This book provides a collection of high-quality research works that address broad challenges in both theoretical and applied aspects of dynamic wireless sensor networks (WSNs) for intelligent and smart applications in a variety of environments. It presents the most central concepts associated …

In this title, the authors leap into a novel paradigm of scalability and cost-effectiveness, on the basis of resource reuse. In a world with much abundance of wirelessly accessible devices, WSN deployments should capitalize on the resources already available in the region of deployment, and …