The main contributions of this paper are summarized as follows:
• Development of a low-cost IoT-based hotel room occupancy detection system using radar sensing for both static and dynamic presence.
• Cloud-based storage and visualization through Firebase and a Django dashboard, enabling real-time and historical monitoring.
• Energy-efficient and sustainable operation using deep sleep, Wi-Fi Manager, and OTA-enabled remote maintenance.
• Demonstration of scalability across domains beyond hospitality, including smart homes, healthcare, and intelligent building systems.
An Explainable AI-Based Ensemble Machine Learning Framework for Early-Stage Diabetes Prediction
• Introduced the Explainable Ensemble Learning Framework (EELF), a Voting Classifier that integrates Logistic Regression, Random Forest, and K-Nearest Neighbors with optimized hyperparameters to predict diabetes.
• Incorporated SHAP and LIME to enhance model interpretability by identifying key feature contributions, thereby improving clinician trust in the decision-making process.
• Conducted a comparative analysis, where the EELF achieved an accuracy of 81.16%, demonstrating strong potential for clinical application.
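The voting-ensemble idea behind the EELF can be sketched as follows. This is a minimal scikit-learn illustration, assuming soft voting over the three named base models; the dataset is synthetic and the hyperparameter values are illustrative placeholders, not the paper's tuned settings.

```python
# Sketch of a soft-voting ensemble in the spirit of the EELF:
# Logistic Regression + Random Forest + KNN, probabilities averaged.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a diabetes-style tabular dataset.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Soft voting averages predicted class probabilities across the base models;
# scale-sensitive models (LR, KNN) get a StandardScaler in their pipelines.
ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", make_pipeline(StandardScaler(),
                              KNeighborsClassifier(n_neighbors=5))),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
accuracy = ensemble.score(X_test, y_test)
```

SHAP or LIME explanations, as used in the paper, would then be applied on top of the fitted ensemble (or its base models) to attribute predictions to input features.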
An Approach to Validate References in Scholarly Articles using RoBERTa
The significant research contribution of this paper lies in proposing a semi-automatic digital system for validating references in scholarly articles using RoBERTa-based semantic similarity analysis.
Key contributions include:
• Introducing a novel framework that leverages RoBERTa embeddings with K-similar search to verify references against cited works.
• Overcoming BERT’s input length limitations by applying document segmentation and preprocessing strategies for handling long research papers.
• Achieving higher accuracy (F1-score: 0.777) compared to BERT and SBERT, demonstrating the effectiveness of RoBERTa for contextual similarity in reference validation.
• Reducing reliance on manual cross-checking and peer reviewers, thereby streamlining the academic publication process while preserving reference authenticity.
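The K-similar search at the core of this framework reduces to a top-K cosine-similarity lookup over embedding vectors. A minimal NumPy sketch is shown below; the toy 2-D vectors stand in for RoBERTa sentence embeddings, and the function name `top_k_similar` is an assumption, not the paper's API.

```python
import numpy as np

def top_k_similar(query, corpus, k=3):
    """Return indices of the k corpus vectors most cosine-similar to query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per corpus row
    return np.argsort(scores)[::-1][:k]

# Toy 2-D "embeddings"; a real system would embed reference and cited-work
# text segments with RoBERTa before searching.
corpus = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]])
query = np.array([1.0, 0.1])
idx = top_k_similar(query, corpus, k=2)   # → indices [0, 2]
```

Document segmentation, as described above, would feed each long paper through this search in chunks so that no single input exceeds the encoder's length limit.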
Road Sign Detection Using YOLO’s Latest Releases: An Evaluation Study of v8, v10, and v11
This paper presents a comparative analysis of three recent YOLO variants—YOLOv8, YOLOv10, and YOLOv11—evaluated on a traffic sign detection task under variable real-world visual conditions. The nano variant of each model was evaluated in terms of precision, recall, mean average precision (mAP), training efficiency, F1-confidence, and runtime speed. This study offers practical insights for deploying object detection models in intelligent transportation systems, aiming to balance real-time performance with detection accuracy. The results indicated that YOLOv8 achieved the highest mAP (0.92), followed by YOLOv11 (0.908) and YOLOv10 (0.873). In terms of runtime performance, YOLOv8 and YOLOv11 demonstrated comparable speeds on the test data, whereas YOLOv10 required more time to complete the inference process.
Image captioning: the application of deep learning to improve the feeling expression and translation process
An ‘Emojian’ algorithm was developed to manage this procedure. The enhanced caption is then translated through the Google/Argosopentech translate API into the target natural language, selected either manually or through automatic detection by the device. Finally, the translated enhanced caption is combined with the scanned image obtained in the initial stage of the process. To assess the effectiveness of the Emojian deep learning algorithm, we compared the total word count of the text caption passed to the algorithm with the total emoji count it returned each time a positive result was produced.
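The word-count versus emoji-count evaluation described above can be sketched as a simple counting routine. This is a hypothetical illustration, not the paper's implementation; the Unicode-range heuristic and the `caption_metrics` helper are assumptions, and a production system would use full Unicode emoji data or a dedicated library.

```python
import re

# Coarse heuristic covering the main emoji code-point blocks.
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def caption_metrics(caption):
    """Return (word_count, emoji_count) for an emoji-enriched caption."""
    words = [t for t in caption.split() if not EMOJI_RE.fullmatch(t)]
    return len(words), len(EMOJI_RE.findall(caption))

words, emojis = caption_metrics("a dog runs on the beach 🐕 😀")
```

Tracking these two counts per caption gives the word-to-emoji comparison used to judge whether the algorithm enriched the caption as intended.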
Analysis of Skin Effect and Transient Behavior in Transformer Bushings with Realistic Material Conductivities
This is important for power system
Modeling of RC Snubber, Ferrite Bead and Gate Drive Impedance for Optimal EMI Suppression and Switching Loss Trade-Off in SiC MOSFET Power Converters
The relentless drive toward extremely fast switching frequencies, elevated operating voltages, increased thermal capability, and reduced switching losses has positioned Silicon Carbide (SiC) MOSFET-based converters at the forefront of high-performance power electronics applications, bringing the benefits of higher switching frequencies, improved power density, and enhanced dynamic response. Unfortunately, these gains are critically constrained by severe Electromagnetic Interference (EMI), high-frequency ringing, and undesirable switching oscillations. These challenges are addressed through systematic modelling and analysis of mitigation measures, holistically co-optimizing a ferrite bead on the gate loop, an RC snubber, and the gate drive impedance. An LTspice simulation framework was developed, incorporating the manufacturer’s SPICE models to accurately capture parasitics and quantify losses. The proposed methodology shows that the combined mitigation technique offers a practical trade-off between EMI suppression and switching performance, without increasing the switching losses.
IntelliGrid: A Hybrid CNN–LSTM Framework for Intelligent Fault Detection in Smart Grids Using PMU Data
The increasing penetration of renewable energy and inverter-based distributed generation has introduced significant challenges for fault detection in modern smart grids. Traditional protection schemes often struggle to identify weak or evolving fault signatures, leading to delayed fault isolation and compromising system reliability. To address this gap, this study introduces a hybrid deep learning framework that integrates convolutional neural networks (CNNs) with long short-term memory (LSTM) networks for intelligent fault detection. The approach leverages phasor measurement unit (PMU) image data, where CNN layers extract spatial fault characteristics and LSTM layers capture temporal dynamics, enabling a more comprehensive representation of fault progression. Data preprocessing included normalization, class rebalancing, and synthetic noise augmentation to ensure robustness. Model performance was validated using stratified 5-fold cross-validation, achieving 99% classification accuracy while maintaining lower computational requirements compared to CNN-only and ensemble-based baselines. Comprehensive evaluation with ROC–AUC, PR–AUC, per-class accuracy, and confusion matrices further demonstrated reliability and interpretability. The findings highlight the potential of the proposed method to enhance fault detection mechanisms, contributing to improved grid stability, faster protection response, and greater resilience of future smart energy systems.
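The stratified 5-fold cross-validation used for validation ensures each fold preserves the class proportions of the full dataset, which matters for the imbalanced fault classes mentioned above. A minimal scikit-learn sketch (the actual model and PMU data are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Imbalanced toy labels: 80 "normal" vs 20 "fault" samples.
y = np.array([0] * 80 + [1] * 20)
X = np.zeros((100, 1))               # features are irrelevant for splitting

# Stratification gives every test fold the same 80/20 class ratio.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_fault_counts = [int(y[test].sum()) for _, test in skf.split(X, y)]
# → each of the 5 test folds contains exactly 4 fault samples
```

Without stratification, a rare fault class could land almost entirely in one fold, making per-fold metrics unstable.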
Real-Time Intrusion Detection in Smart EV Charging Networks Using Embedded Deep Learning
The increasing connectivity of Electric Vehicle Supply Equipment (EVSE) within smart grid networks has compounded the threat of cyber attacks, particularly Distributed Denial-of-Service (DDoS) attacks. This paper introduces a lightweight, real-time intrusion detection system based on a hybrid deep learning architecture that combines Transformer encoders with a Multilayer Perceptron (MLP) classifier. The proposed model uses kernel-level event logs to capture both temporal dependencies and high-dimensional feature interactions. A well-structured preprocessing pipeline includes feature-leakage prevention, normalization, class balancing, and stratified cross-validation, ensuring data integrity and effective learning. Experimental evaluation on a real-world dataset confirms the model’s superior performance, with 100% accuracy, precision, recall, and F1-score, and ideal ROC-AUC and PR-AUC scores. Furthermore, the framework is streamlined for deployment on resource-constrained edge devices to enable decentralized, on-device threat detection. These results underscore the effectiveness and viability of the proposed solution in enhancing the cybersecurity posture of modern EV charging ecosystems.
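The feature-leakage prevention mentioned in the preprocessing pipeline is commonly achieved by fitting the normalizer inside a cross-validation pipeline, so validation folds are never used to compute scaling statistics. The sketch below illustrates that pattern under assumptions: it substitutes a plain scikit-learn MLP for the paper's Transformer–MLP hybrid and uses synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for kernel-level event-log features.
X, y = make_classification(n_samples=300, n_features=10, random_state=1)

# Fitting the scaler inside the pipeline means each CV split scales its
# validation fold with statistics learned only from the training fold,
# preventing information leakage from validation data.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1),
)
scores = cross_val_score(
    model, X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=1),
)
```

The anti-pattern to avoid is calling `StandardScaler().fit(X)` on the full dataset before splitting, which silently leaks test-fold statistics into training.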
Data-Driven Condition Monitoring and Fault Detection of Power Transformers Using ML
Reliable transformer operation is critical for minimizing downtime and ensuring power system stability. Dissolved Gas Analysis (DGA) is the most widely used diagnostic tool, yet ratio-based methods such as the Duval Triangle and Key Gas Method often fail when signatures overlap or appear at early fault stages. While machine learning has improved accuracy, models remain vulnerable to noise, imbalance, and overfitting. This paper proposes a CatBoost-based framework that combines statistical and energy features of H₂, CO, C₂H₂, and C₂H₄ gases with engineered ratios to capture complex inter-gas dependencies. With tuned hyperparameters, the model achieved 97.6% overall accuracy and strong class-wise performance: 98.9% (Normal), 94.7% (Partial Discharge), 93.9% (Low-Energy Discharge), and 89.0% (Low-Temperature Overheating). Feature importance analysis identified H₂ and gas ratios as key contributors, while training dynamics showed rapid and stable convergence. The results demonstrate robustness, interpretability, and efficiency, highlighting the framework’s potential for real-time transformer fault detection and improved power system resilience.
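The engineered inter-gas ratios referenced above echo classical ratio-based DGA diagnostics. A minimal pandas sketch of such feature engineering is shown below; the specific ratios and ppm values are illustrative assumptions, not the paper's exact feature set.

```python
import pandas as pd

# Toy DGA readings in ppm; values are illustrative only.
df = pd.DataFrame({
    "H2":   [100.0, 30.0],
    "CO":   [400.0, 120.0],
    "C2H2": [5.0, 1.0],
    "C2H4": [50.0, 10.0],
})

eps = 1e-6  # guard against division by zero for gases below detection limit
# C2H2/C2H4 is a classic discharge-vs-thermal indicator in ratio methods.
df["C2H2_C2H4"] = df["C2H2"] / (df["C2H4"] + eps)
df["H2_CO"] = df["H2"] / (df["CO"] + eps)
```

Ratio features like these, appended alongside the raw gas concentrations and statistical features, are what a gradient-boosting model such as CatBoost would consume to learn inter-gas dependencies.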
