M. Nezhadshahbodaghi, K. Bahmani, M. R. Mosavi, D. Martín
Volume 19, Issue 2 (6-2023)
Abstract

Time-series prediction can be applied in virtually every field where timely information is needed. In this paper, among the many known chaotic systems, the Mackey-Glass and Lorenz systems are chosen. To predict them, a Multi-Layer Perceptron Neural Network (MLP NN) is trained with a variety of heuristic methods, including genetic, particle swarm, ant colony, and evolutionary strategy algorithms, as well as population-based incremental learning. In addition to these methods, we propose a Biogeography-Based Optimization (BBO) algorithm and a fuzzy system to predict these chaotic systems. Simulation results show that when the MLP NN is trained with the proposed meta-heuristic BBO algorithm, training and testing accuracy improve by 28.5% and 51%, respectively. Furthermore, when the presented fuzzy system is used to predict the chaotic systems, it improves training and testing accuracy by approximately 98.5% and 91.3%, respectively.
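For reference, the Mackey-Glass system is the delay differential equation dx/dt = beta * x(t - tau) / (1 + x(t - tau)^n) - gamma * x(t). Below is a minimal Python sketch, not the authors' code, that generates the series in its standard chaotic regime (tau = 17) and builds sliding-window input/target pairs of the kind an MLP one-step predictor would train on; the window length of 4 is an illustrative choice.

    import numpy as np

    def mackey_glass(n_samples=2000, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0):
        # Constant initial history x(t <= 0) = 0.9, then explicit Euler steps.
        x = [0.9] * (tau + 1)
        for _ in range(n_samples):
            x_tau = x[-tau - 1]                          # delayed state x(t - tau)
            dx = beta * x_tau / (1.0 + x_tau**n) - gamma * x[-1]
            x.append(x[-1] + dt * dx)
        return np.array(x[tau + 1:])

    series = mackey_glass()
    window = 4                                           # illustrative embedding length
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]                                  # one-step-ahead targets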
Mohamed Hussien Moharam, Aya W. Wafik
Volume 20, Issue 4 (11-2024)
Abstract

A high peak-to-average power ratio (PAPR) is a major drawback of Filter Bank Multicarrier (FBMC) in 5G systems. This research evaluates the PAPR reduction achievable in the FBMC system using four techniques: classical tone reservation (TR), tone reservation combined with a sliding window (SW-TR), tone reservation combined with active constellation extension (TRACE), and tone reservation combined with deep learning (TR-Net). TR-Net achieves the greatest PAPR reduction, around 8.6 dB below the original value. This work advances PAPR reduction in FBMC systems by proposing three hybrid methods and highlights the deep learning-based TR-Net technique as a promising solution for efficient, distortion-free signal processing.
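For context, PAPR is the ratio of peak to mean instantaneous signal power, PAPR_dB = 10 log10( max|x(t)|^2 / E[|x(t)|^2] ). The sketch below is a simplified illustration rather than the paper's setup (the FBMC prototype filtering is omitted); it computes the PAPR of a multicarrier symbol built from random QPSK subcarriers.

    import numpy as np

    def papr_db(x):
        # Peak-to-average power ratio of a complex baseband signal, in dB.
        power = np.abs(x) ** 2
        return 10.0 * np.log10(power.max() / power.mean())

    rng = np.random.default_rng(0)
    n_sub = 512                                  # illustrative number of subcarriers
    qpsk = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)
    signal = np.fft.ifft(qpsk) * np.sqrt(n_sub)  # unit-mean-power time-domain symbol
    print(f"PAPR = {papr_db(signal):.2f} dB")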
Haniye Merrikhi, Hossein Ebrahimnezhad
Volume 20, Issue 4 (11-2024)
Abstract

Robots have become integral to modern society, taking over both complex and routine human tasks. Recent advancements in depth camera technology have propelled computer vision-based robotics into a prominent field of research. Many robotic tasks, such as picking up, carrying, and using tools or objects, begin with an initial grasping step. Vision-based grasping requires the precise identification of grasp locations on objects, making the segmentation of objects into meaningful components a crucial stage in robotic grasping. In this paper, we present a system designed to detect the graspable parts of objects for a specific task. Recognizing that everyday household items are typically grasped at certain sections for carrying, we created a database of these objects and their corresponding graspable parts. Building on the success of the Dynamic Graph CNN (DGCNN) network in segmenting object components, we enhanced this network to detect the graspable areas of objects. The enhanced network was trained on the compiled database, and the visual results, along with the obtained Intersection over Union (IoU) metrics, demonstrate its success in detecting graspable regions. It achieved a grand mean IoU (gmIoU) of 92.57% across all classes, outperforming established networks such as PointNet++ in part segmentation on this dataset. Furthermore, statistical analysis using analysis of variance (ANOVA) and a t-test validates the superiority of our method.
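For reference, the IoU figures above are computed per part label and then averaged. A minimal Python sketch with toy labels, not the authors' evaluation code:

    import numpy as np

    def part_ious(pred, gt, num_parts):
        # Intersection over union for each part label present in pred or gt.
        ious = []
        for p in range(num_parts):
            inter = np.sum((pred == p) & (gt == p))
            union = np.sum((pred == p) | (gt == p))
            if union > 0:                       # skip parts absent from both
                ious.append(inter / union)
        return ious

    pred = np.array([0, 0, 1, 1, 1, 2])         # toy per-point predictions
    gt = np.array([0, 0, 1, 1, 2, 2])           # toy ground-truth part labels
    print(np.mean(part_ious(pred, gt, num_parts=3)))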
Mohamad Haniff Junos, Anis Salwa Mohd Khairuddin, Elmi Abu Bakar, Ahmad Faizul Hawary
Volume 21, Issue 2 (6-2025)
Abstract

Vehicle detection in satellite images is a challenging task due to variability in scale and resolution, complex backgrounds, and variability in object appearance. One-stage detection models are currently the state of the art in object detection due to their faster detection times. However, these models have complex architectures that require powerful processing units to train, generate a large number of parameters, and run slowly on embedded devices. To address these problems, this work proposes an enhanced lightweight object detection model based on the YOLOv4 Tiny model. The proposed model incorporates multiple modifications, including a Mix-efficient layer aggregation network within its backbone to improve efficiency by reducing the number of generated parameters. Additionally, an improved small efficient layer aggregation network is adopted in the modified path aggregation network to enhance feature extraction across scales. Finally, the proposed model incorporates the Swish activation function and an extra YOLO detection head. Experimental results on the VEDAI dataset demonstrate that the proposed model achieves a higher mean average precision and the smallest model size compared to other lightweight models. Moreover, the proposed model achieves real-time performance on the NVIDIA Jetson Nano. These findings show that the proposed model offers the best trade-off among detection accuracy, model size, and detection time, making it highly suitable for deployment on embedded devices with limited capacity.
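The Swish activation referred to above is swish(x) = x * sigmoid(x), a smooth, non-monotonic alternative to the Leaky ReLU typically used in YOLOv4 Tiny. A one-function Python sketch:

    import numpy as np

    def swish(x):
        # swish(x) = x * sigmoid(x); smooth, and lets small negative values through.
        return x / (1.0 + np.exp(-x))

    print(swish(np.array([-2.0, 0.0, 2.0])))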
Humairah Mansor, Shazmin Aniza Abdul Shukor, Razak Wong Chen Keng, Nurul Syahirah Khalid
Volume 21, Issue 2 (6-2025)
Abstract

Building fixtures such as lighting are important to model, especially when a high level of modelling detail is required for planning indoor renovation. LiDAR is often used to capture these details due to its ability to produce dense information. However, this results in a large amount of data that must be processed with a dedicated method, especially for detecting lighting fixtures. This work proposes a method named Size Density-Based Spatial Clustering of Applications with Noise (SDBSCAN) that detects lighting fixtures by calculating cluster sizes and extracting the clusters that belong to lighting fixtures. It is based on Density-Based Spatial Clustering of Applications with Noise (DBSCAN), with geometrical features such as size incorporated to detect and classify the fixtures. The detection results on the raw point cloud data are validated using the F1-score and IoU to assess the accuracy of the predicted object classification and the positions of the detected fixtures. The results show that the proposed method successfully detects lighting fixtures, with scores above 0.9. The developed algorithm is expected to detect and classify fixtures in any 3D point cloud data representing buildings.
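A minimal Python sketch of the size-filtering idea behind SDBSCAN; the eps, min_samples, and footprint threshold values below are hypothetical stand-ins, not the paper's parameters:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def detect_fixtures(points, eps=0.1, min_samples=20, max_extent=0.8):
        # Cluster the point cloud, then keep clusters whose horizontal
        # bounding-box footprint is small enough to be a lighting fixture.
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        fixtures = []
        for lbl in set(labels) - {-1}:          # label -1 marks DBSCAN noise
            cluster = points[labels == lbl]
            extent = cluster.max(axis=0) - cluster.min(axis=0)
            if np.all(extent[:2] <= max_extent):
                fixtures.append(cluster)
        return fixtures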


© 2022 by the authors. Licensee IUST, Tehran, Iran. This is an open access journal distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.