- Articles -
01
DLE-YOLO: An efficient object detection algorithm with dual-branch lightweight excitation network
Peitao Cheng, Xuanjiao Lei, Haoran Chen, Xiumei Wang
Citation
Peitao Cheng, Xuanjiao Lei, Haoran Chen, Xiumei Wang. DLE-YOLO: An efficient object detection algorithm with dual-branch lightweight excitation network[J]. Journal of Information and Intelligence, 2025, 3(2): 91-102. DOI: 10.1016/j.jiixd.2024.08.002
Abstract
As a computer vision task, object detection can be applied to various real-world scenarios. However, high-performing algorithms often come with large numbers of parameters and high computational complexity. To meet the demand for high-performance object detection on mobile and embedded devices with limited computational resources, we propose a new lightweight object detection algorithm called DLE-YOLO. Firstly, we design a novel backbone called the dual-branch lightweight excitation network (DLEN) for feature extraction, which is mainly constructed from dual-branch lightweight excitation units (DLEU); each DLEU stacks different numbers of dual-branch lightweight excitation blocks (DLEB), which extract comprehensive features and integrate information across feature channels. Secondly, to enhance the network's ability to capture key feature information in regions of interest, the attention module HS-coordinate attention (HS-CA) is introduced into the network. Thirdly, SIoU loss is adopted as the localization loss to further improve bounding-box accuracy. Our method achieves an mAP of 46.0% on the MS-COCO dataset, a 2% mAP improvement over the baseline YOLOv5-m, while reducing the parameter count by 19.3% and GFLOPs by 12.9%. Furthermore, our method outperforms several advanced lightweight object detection algorithms, validating the effectiveness of our approach.
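The abstract names HS-coordinate attention (HS-CA) without spelling out its structure. As one plausible reading, the following minimal PyTorch sketch combines standard coordinate attention with a hard-swish activation (the "HS" assumed here); the class name, reduction ratio, and layer layout are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    """Sketch of a coordinate-attention block in the spirit of HS-CA.

    Pools features along each spatial axis, mixes them through a shared
    1x1 conv with a hard-swish activation, then re-weights the input with
    per-direction attention maps.
    """
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        hidden = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, hidden, kernel_size=1)
        self.bn = nn.BatchNorm2d(hidden)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(hidden, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Direction-aware pooling: keep one spatial axis at a time.
        xh = self.pool_h(x)                       # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)   # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        # Per-direction attention maps gate the input feature map.
        ah = torch.sigmoid(self.conv_h(yh))                      # (B, C, H, 1)
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * ah * aw
```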
Scan the QR code or copy the link below to view the original article
https://www.sciencedirect.com/science/article/pii/S2949715924000751
02
Unsupervised meta-learning with domain adaptation based on a multi-task reconstruction-classification network for few-shot hyperspectral image classification
Yu Liu, Caihong Mu, Shanjiao Jiang, Yi Liu
Citation
Yu Liu, Caihong Mu, Shanjiao Jiang, Yi Liu. Unsupervised meta-learning with domain adaptation based on a multi-task reconstruction-classification network for few-shot hyperspectral image classification[J]. Journal of Information and Intelligence, 2025, 3(2): 103-112. DOI: 10.1016/j.jiixd.2024.06.001
Abstract
Although deep learning has achieved great success in hyperspectral image (HSI) classification, few-shot HSI classification deserves thorough study because labeled samples are difficult and expensive to acquire. Meta-learning methods can effectively improve performance on few-shot HSI classification. However, most existing meta-learning methods for HSI classification are supervised and still rely heavily on labeled data for meta-training. Moreover, there are many cross-scene classification tasks in the real world, yet domain adaptation for unsupervised meta-learning has so far been ignored in HSI classification. To address these issues, this paper proposes an unsupervised meta-learning method with domain adaptation based on a multi-task reconstruction-classification network (MRCN) for few-shot HSI classification. MRCN does not need any labeled data for meta-training; pseudo labels are generated by multiple spectral random sampling and data augmentation. The meta-training of MRCN jointly learns a shared encoding representation for two tasks and domains. On the one hand, we design an encoder-classifier to learn the classification task on the source-domain data. On the other hand, we devise an encoder-decoder to learn the reconstruction task on the target-domain data. Experimental results on four HSI datasets demonstrate that MRCN performs better than several state-of-the-art methods with only two to five labeled samples per class. To the best of our knowledge, the proposed method is the first unsupervised meta-learning method that considers domain adaptation for few-shot HSI classification.
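The abstract says pseudo labels come from multiple spectral random sampling and data augmentation. Below is a minimal NumPy sketch of how such pseudo-labeled episodes could be built from unlabeled spectra; the function name and the specific augmentations (band dropping, Gaussian noise) are illustrative assumptions, not the paper's recipe.

```python
import numpy as np

def make_pseudo_episode(unlabeled, n_way=5, n_aug=4, band_keep=0.9, rng=None):
    """Build one pseudo-labeled episode from unlabeled spectra.

    unlabeled: float array of shape (n_pixels, n_bands).
    Each randomly drawn spectrum acts as its own pseudo-class; spectral
    random sampling (dropping a random subset of bands) plus Gaussian
    noise yields several augmented views sharing that pseudo label.
    """
    rng = rng or np.random.default_rng()
    n_pixels, n_bands = unlabeled.shape
    anchors = unlabeled[rng.choice(n_pixels, size=n_way, replace=False)]
    views, labels = [], []
    for cls, spectrum in enumerate(anchors):
        for _ in range(n_aug):
            view = spectrum.copy()
            # Spectral random sampling: zero out a random subset of bands.
            view[rng.random(n_bands) > band_keep] = 0.0
            # Lightweight augmentation: small Gaussian perturbation.
            view += rng.normal(0.0, 0.01, size=n_bands)
            views.append(view)
            labels.append(cls)
    return np.stack(views), np.asarray(labels)
```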
Scan the QR code or copy the link below to view the original article
https://www.sciencedirect.com/science/article/pii/S2949715924000544
03
Automated data processing and feature engineering for deep learning and big data applications: A survey
Alhassan Mumuni, Fuseini Mumuni
Citation
Alhassan Mumuni, Fuseini Mumuni. Automated data processing and feature engineering for deep learning and big data applications: A survey[J]. Journal of Information and Intelligence, 2025, 3(2): 113-153. DOI: 10.1016/j.jiixd.2024.01.002
Abstract
The modern approach to artificial intelligence (AI) aims to design algorithms that learn directly from data. This approach has achieved impressive results and has contributed significantly to the progress of AI, particularly in the sphere of supervised deep learning. It has also simplified the design of machine learning systems, as the learning process is highly automated. However, not all data processing tasks in conventional deep learning pipelines have been automated. In most cases, data have to be manually collected, preprocessed, and further extended through data augmentation before they can be used effectively for training. Recently, special techniques for automating these tasks have emerged. The automation of data processing tasks is driven by the need to utilize large volumes of complex, heterogeneous data for machine learning and big data applications. Today, end-to-end automated data processing systems based on automated machine learning (AutoML) techniques can take raw data and transform them into useful features for big data tasks by automating all intermediate processing stages. In this work, we present a thorough review of approaches for automating data processing tasks in deep learning pipelines, including automated data preprocessing (e.g., data cleaning, labeling, missing data imputation, and categorical data encoding), data augmentation (including synthetic data generation using generative AI methods), and feature engineering (specifically, automated feature extraction, feature construction, and feature selection). In addition to automating specific data processing tasks, we discuss the use of AutoML methods and tools to simultaneously optimize all stages of the machine learning pipeline.
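As a small concrete instance of the preprocessing pipelines the survey covers, here is a scikit-learn sketch that chains missing-data imputation, categorical encoding, scaling, and feature selection so that raw tabular columns come out as model-ready features. It illustrates the general idea of automated preprocessing, not any specific AutoML tool discussed in the paper; the function name and parameter defaults are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

def build_auto_preprocessor(df: pd.DataFrame, k_features: int = 20) -> Pipeline:
    """Assemble an imputation -> encoding -> selection pipeline for raw data."""
    num_cols = df.select_dtypes(include=np.number).columns.tolist()
    cat_cols = [c for c in df.columns if c not in num_cols]
    preprocess = ColumnTransformer([
        # Numeric columns: median imputation followed by standardization.
        ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                          ("scale", StandardScaler())]), num_cols),
        # Categorical columns: mode imputation followed by one-hot encoding.
        ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                          ("encode", OneHotEncoder(handle_unknown="ignore"))]), cat_cols),
    ])
    # Univariate feature selection keeps the k most informative features.
    return Pipeline([("preprocess", preprocess),
                     ("select", SelectKBest(f_classif, k=k_features))])

# Usage: build_auto_preprocessor(df).fit_transform(df, y) maps raw columns
# to a selected feature matrix with no manual per-column handling.
```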
Scan the QR code or copy the link below to view the original article
https://www.sciencedirect.com/science/article/pii/S2949715924000027
04
Cooperative target allocation for heterogeneous agent models using a matrix-encoding genetic algorithm
Shan Gao, Lei Zuo, Xiaofei Lu, Bo Tang
Citation
Shan Gao, Lei Zuo, Xiaofei Lu, Bo Tang. Cooperative target allocation for heterogeneous agent models using a matrix-encoding genetic algorithm[J]. Journal of Information and Intelligence, 2025, 3(2): 154-172. DOI: 10.1016/j.jiixd.2024.07.002
Abstract
Heterogeneous platforms collaborate to execute tasks through different operational models, resulting in a task allocation problem that incorporates different agent models. In this paper, we address the problem of cooperative target allocation for heterogeneous agent models, for which we design a task-agent matching model and a multi-agent routing model. Since the heterogeneity and cooperativity of the agent models lead to a coupled allocation problem, we propose a matrix-encoding genetic algorithm (MEGA) to plan reliable allocation schemes. Specifically, an integer matrix encoding is adopted in MEGA to represent the priority between targets and agents, and a ranking rule is designed to decode the priority matrix. Based on the proposed encoding-decoding framework, we use discrete and continuous optimization operators to update the target-agent match pairs and task execution orders. In addition, to adaptively balance the diversity and intensification of the population, a dynamic supplement strategy based on Hamming distance is proposed; this strategy adds individuals with different diversity and fitness at different stages of the optimization process. Finally, simulation experiments show that the MEGA algorithm outperforms conventional target allocation algorithms in the heterogeneous agent scenario.
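The abstract describes an integer priority matrix decoded by a ranking rule. The sketch below shows one way such a decode could work, plus a Hamming-distance measure of the kind usable in the population-supplement step; the matching rule and all names are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def decode_priority_matrix(priority: np.ndarray) -> dict:
    """Decode a matrix-encoded chromosome into per-agent target schedules.

    priority[i, j] is an integer priority of agent i for target j. Here,
    each target is matched to its highest-priority agent, and each agent
    executes its targets in descending order of its own priorities.
    """
    n_agents, n_targets = priority.shape
    assignment = priority.argmax(axis=0)  # target j -> chosen agent
    schedules = {}
    for agent in range(n_agents):
        mine = np.where(assignment == agent)[0]
        # Ranking rule: order an agent's targets by its own priorities.
        schedules[agent] = mine[np.argsort(-priority[agent, mine])].tolist()
    return schedules

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of differing entries between two flattened priority
    matrices, serving as a population-diversity measure."""
    return float(np.mean(a.ravel() != b.ravel()))
```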
Scan the QR code or copy the link below to view the original article
https://www.sciencedirect.com/science/article/pii/S2949715924000659
05
A spatiotemporal graph wavelet neural network for traffic flow prediction
Linjie Zhang, Jianfeng Ma
Citation
Linjie Zhang, Jianfeng Ma. A spatiotemporal graph wavelet neural network for traffic flow prediction[J]. Journal of Information and Intelligence, 2025, 3(2): 173-188. DOI: 10.1016/j.jiixd.2023.03.001
Abstract
Traffic flow prediction is fast becoming a key instrument in transportation systems and has achieved impressive performance for traffic management. Graph neural networks play a critical role in the development of traffic network management. However, the complexity of road networks and traffic conditions makes it difficult for them to obtain sufficient spatiotemporal information. Context can strongly affect prediction results, yet previous methods rarely took it into account. Besides, the nonlinear characteristics of graph neural networks are hard to quantify with fine granularity, and overfitting is hard to eliminate. To tackle these challenges, in this paper we present a spatiotemporal graph wavelet neural network to improve the ability of representations. Specifically, we introduce wavelet transforms into the deep learning model to exploit their strong nonlinear approximation ability. Furthermore, we mine location and time patterns to evaluate temporal dependence and spatial proximity correlation. In addition, we introduce a historical context attention mechanism that gives fine-grained historical context grade evaluation to ease the phenomenon of over-smoothing. Experimental results on real-world datasets show that our work achieves considerable improvements over baseline and state-of-the-art models. Moreover, our work attains better learning performance by employing the connection and interaction of graphs.
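The abstract introduces wavelet transforms into the graph model without giving the construction. One common choice in graph wavelet neural networks is a heat-kernel wavelet basis built from the normalized Laplacian, sketched below; this is a generic construction for illustration, not necessarily the paper's formulation.

```python
import numpy as np

def graph_wavelet_basis(adj: np.ndarray, s: float = 1.0):
    """Compute a heat-kernel graph wavelet basis and its inverse.

    adj: symmetric adjacency matrix of the road-network graph.
    Returns psi = U exp(-s*Lambda) U^T and psi_inv = U exp(+s*Lambda) U^T,
    where (Lambda, U) is the eigendecomposition of the normalized Laplacian
    and s is the diffusion scale.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    # Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    lam, u = np.linalg.eigh(lap)
    psi = u @ np.diag(np.exp(-s * lam)) @ u.T
    psi_inv = u @ np.diag(np.exp(s * lam)) @ u.T
    return psi, psi_inv

# A graph wavelet convolution then filters a node signal x as
#   y = psi @ np.diag(theta) @ psi_inv @ x
# with theta the learnable spectral filter coefficients.
```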
Scan the QR code or copy the link below to view the original article
https://www.sciencedirect.com/science/article/pii/S2949715923000021