https://www.jcbi.org/index.php/Main/issue/feedJournal of Computing & Biomedical Informatics2026-03-01T00:00:00+00:00Journal of Computing & Biomedical Informaticseditor@jcbi.orgOpen Journal Systems<p style="text-align: justify;"><strong>Journal of Computing & Biomedical Informatics (JCBI) </strong>is a peer-reviewed open-access journal recognised by the Higher Education Commission (H.E.C.) Pakistan. JCBI publishes high-quality scholarly articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. All submitted articles should report original, previously unpublished research results, experimental or theoretical. Articles submitted to the journal should meet these criteria and must not be under consideration for publication elsewhere. Manuscripts should follow the style of the journal and are subject to both review and editing. JCBI encourages authors of original research papers to describe work such as the following:</p> <ul> <li>Articles in the areas of computational approaches, artificial intelligence, big data, software engineering, cybersecurity, the Internet of Things, and data analysis.</li> <li>Articles that report substantive results on a wide range of learning methods applied to a variety of learning problems.</li> <li>Articles that provide solid support via empirical studies, theoretical analysis, or comparison to psychological phenomena.</li> <li>Articles that respond to a need in medicine or analyze rare data with novel methods.</li> <li>Articles that involve healthcare professionals; motivation for the work and evaluation results are usually necessary.</li> <li>Articles that show how to apply learning methods to solve important application problems.</li> </ul> <p style="text-align: justify;">Journal of Computing & Biomedical Informatics (JCBI) welcomes interdisciplinary work that studies and pursues the effective use of computational and biomedical data, information, and knowledge for scientific inquiry,
problem-solving, and decision making, motivated by efforts to improve human health. Novel high performance computing methods, big data analysis, and artificial intelligence that advance material technologies are especially welcome.</p>https://www.jcbi.org/index.php/Main/article/view/1216Hybrid Image Encryption Using LWE-Based Post-Quantum Key Encapsulation and Chaos-Based Symmetric Encryption2026-01-16T16:56:55+00:00Bharti Ahuja Salunkebharti.salunke99@gmail.comSharad Salunkesharad.sal@gmail.com<p>This paper introduces a hybrid image-encryption system that combines Learning with Errors (LWE)-based post-quantum key encapsulation with chaos-based symmetric encryption to secure visual information against both classical and post-quantum threats. LWE is used to securely generate and protect the symmetric encryption keys, while chaotic maps serve as a high-throughput pseudo-random keystream generator providing confusion and diffusion. Rather than claiming unconditional quantum security, the framework is evaluated against explicit adversarial models, effective key length under Grover's algorithm, and parameterized lattice hardness, in line with pragmatic cryptographic practice. Experiments on standard benchmark and medical images show strong diffusion, high entropy, low ciphertext correlation, and correct decryption, and the scheme is further analyzed in terms of runtime and scalability.
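As a minimal illustration of the chaotic-keystream component (not the authors' implementation: the seed values, map parameter, and byte quantization below are assumptions, and a real scheme would derive the seed from the LWE key encapsulation and add a diffusion stage), a logistic-map XOR stream cipher can be sketched as:

```python
def logistic_keystream(x0: float, r: float, n: int) -> bytes:
    """Generate n pseudo-random bytes by iterating the logistic map x -> r*x*(1-x)."""
    x = x0
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize the chaotic state to a byte
    return bytes(out)

def chaos_xor(data: bytes, key: tuple[float, float]) -> bytes:
    """XOR the data with the keystream; applying it twice restores the input."""
    x0, r = key
    ks = logistic_keystream(x0, r, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

# Round-trip demo on a toy "image" (raw pixel bytes). The (x0, r) seed is
# illustrative; r near 4 keeps the logistic map in its chaotic regime.
pixels = bytes(range(16))
key = (0.3141592653, 3.99)
cipher = chaos_xor(pixels, key)
plain = chaos_xor(cipher, key)
```

Because XOR with a deterministic keystream is its own inverse, decryption reuses the same function with the same key.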
The findings indicate that the proposed design offers a practical hybrid approach to post-quantum-aware image encryption.</p>2026-02-27T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1201Bridging the Gap: Real-Time American Sign Language Recognition Using a Somatosensory Glove2026-01-10T21:37:18+00:00Rawal Khan01-133222-064@student.bahria.edu.pkNadia Sultannadiaimran.buic@bahria.edu.pkJoddat Fatimajoddat.fatima@bahria.edu.pk<p>Sign Language (SL) is the primary language for millions of Deaf and Hard-of-Hearing (DHH) people, yet a substantial communication barrier persists because most hearing people do not know SL. Vision-based SL recognition methods have come a long way, but they still face problems such as illumination variation, background clutter, hand occlusion, and privacy issues, while commercial glove-based devices are often expensive and not very portable. This paper introduces a somatosensory glove-based ASL recognition system with wireless capability, able to recognize both static and dynamic American Sign Language (ASL) gestures by fusing flex and inertial sensing. Data were collected over a wired interface to allow noise-free, high-fidelity signal acquisition. Two custom datasets covering 19 gestures (15 static and 4 dynamic) were collected from 16 participants, totalling roughly 8000–9500 labelled samples. Three machine learning models, XGBoost, Random Forest (RF), and a multilayer perceptron (MLP), were trained as gesture classifiers. Among them, XGBoost obtained the most robust performance, achieving sample-level cross-validated accuracies of 97.6% and 99.2% for static and dynamic gestures, respectively, while RF and MLP gave competitive baseline results.
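The sample-level classification setup can be illustrated with a deliberately simple stand-in for the paper's XGBoost/RF/MLP models: a nearest-centroid classifier over synthetic flex/IMU feature vectors (all data, dimensions, and class counts here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for glove data: each sample is an 8-dim feature vector
# (e.g., 5 flex-sensor readings plus 3 IMU-axis means). Three synthetic
# gesture classes are generated around random prototype patterns.
centroids_true = rng.uniform(0, 1, size=(3, 8))
X = np.vstack([c + rng.normal(0, 0.05, size=(40, 8)) for c in centroids_true])
y = np.repeat(np.arange(3), 40)

def fit_centroids(X, y):
    """Nearest-centroid classifier: one prototype vector per gesture class."""
    return np.stack([X[y == k].mean(axis=0) for k in np.unique(y)])

def predict(X, centroids):
    """Assign each sample to the class of its closest prototype."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

protos = fit_centroids(X, y)
acc = (predict(X, protos) == y).mean()
```

Gradient-boosted trees replace the centroid rule in the actual system, but the pipeline shape (feature vector in, gesture label out) is the same.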
The results highlight the potential of low-cost wearable sensing combined with machine-learning-based classification and point to a viable, privacy-preserving path toward scalable, near-real-time ASL recognition systems.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1271RAGE-Fusion: Reliability-Aware Multimodal Emotion Fusion for Real-Time Interactive Interfaces2026-02-06T09:19:12+00:00Yunxue Guanguanyunxue@kookmin.ac.krYingying Zhukr170089@163.comYim Jinhohci.yim@kookmin.ac.kr<p>Emotionally responsive interactive interfaces can enhance the user experience by adapting their responses and presentation to the user's affective state. Deploying such systems in practice remains difficult, however: emotion cues are multimodal, noisy, and frequently absent (e.g., webcam turned off, poor audio, occlusions), while interactive systems demand low latency and predictable behaviour to maintain user trust. This paper presents RAGE-Fusion, a reliability-aware multimodal deep learning system for emotion recognition and interface adaptation that jointly models text, audio, and visual information. RAGE-Fusion combines pretrained modality encoders with cross-modal attention to integrate complementary affective information, and a reliability-gated fusion mechanism adjusts each modality's weight when its input is missing or corrupted, based on inferred input quality, improving robustness. To fit affect recognition to interactive constraints, we also introduce a multi-objective optimization scheme that balances emotion-prediction performance, inference latency, and temporal consistency of predictions across conversational turns. Experiments on the MELD benchmark show that RAGE-Fusion consistently outperforms unimodal and fusion baselines, especially under modality dropout and noise.
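The reliability-gating idea can be sketched as a quality-weighted combination of per-modality emotion logits; this is a simplified stand-in for the learned gating, and the modality names, quality scores, and class count are illustrative:

```python
import numpy as np

def reliability_gated_fusion(logits: dict, quality: dict) -> np.ndarray:
    """Fuse per-modality emotion logits, weighting each modality by an
    inferred quality score in [0, 1]; a missing modality (None) is skipped."""
    names = [m for m in logits if logits[m] is not None]
    w = np.array([quality[m] for m in names], dtype=float)
    if w.sum() == 0:
        raise ValueError("no reliable modality available")
    w = w / w.sum()                      # normalize weights over present modalities
    stacked = np.stack([logits[m] for m in names])
    return (w[:, None] * stacked).sum(axis=0)

# Three modalities voting over 4 emotion classes; video is "off" (None),
# and noisy audio gets a low inferred quality score.
logits = {
    "text":  np.array([2.0, 0.1, 0.0, 0.0]),
    "audio": np.array([0.5, 1.5, 0.0, 0.0]),
    "video": None,
}
quality = {"text": 0.9, "audio": 0.3, "video": 0.0}
fused = reliability_gated_fusion(logits, quality)
```

With these numbers the reliable text modality dominates the fused decision, which is exactly the behaviour the gate is meant to produce under audio corruption.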
We also report calibration and stability analyses to support safe interface-adaptation decisions. The findings suggest that reliability-aware fusion and interaction-aware optimization form a viable basis for developing robust, real-time emotion-aware interfaces.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1214A Robust Explainable Deep Learning Ensemble for Early Skin Cancer Diagnosis2025-12-30T19:35:22+00:00Hammad Alihammadaly.6229@gmail.comMuhammad Rizwan Rahsid Ranarizwanrana315@gmail.comAbdul Samisamisohail707@gmail.com<p>Skin cancer is one of the most common malignancies worldwide, and detecting it at an early stage is crucial for improving patient outcomes. This study introduces a hybrid deep learning framework that combines self-supervised pretraining, multi-architecture ensemble learning, and explainable AI to enable accurate and interpretable skin cancer diagnosis. The framework uses SimCLR-based contrastive learning to derive powerful feature representations from large sets of unlabeled dermatoscopic images before applying supervised fine-tuning or feature-level fusion across three architectures (EfficientNetV2-L, Swin Transformer, and ConvNeXt). A LightGBM-based meta-learning classifier operates on the features derived from these architectures, and explainability is provided through the Grad-CAM and SHAP explainable AI methods.
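Feature-level fusion of several backbones' embeddings ahead of a meta-classifier can be sketched as normalize-and-concatenate; the embedding dimensions below are illustrative assumptions, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(embeddings: list) -> np.ndarray:
    """Feature-level fusion: L2-normalize each backbone's embedding so no
    single architecture dominates by scale, then concatenate them into one
    meta-feature vector for the downstream meta-classifier."""
    normed = [e / (np.linalg.norm(e) + 1e-12) for e in embeddings]
    return np.concatenate(normed)

# Hypothetical per-image embeddings from the three backbones
# (dimensions are made up for illustration).
effnet = rng.normal(size=1280)     # EfficientNetV2-L
swin = rng.normal(size=1024)       # Swin Transformer
convnext = rng.normal(size=768)    # ConvNeXt
fused = fuse_features([effnet, swin, convnext])  # 3072-dim meta-feature
```

A gradient-boosted meta-learner (LightGBM in the paper) would then be trained on such fused vectors.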
Experiments on benchmark datasets (ISIC and HAM10000) demonstrate that the proposed method outperforms established baseline models by a wide margin, achieving 94.5% accuracy, 92.55% precision, and 93.26% recall, evidencing its robustness, high sensitivity, and reliability for the early detection of skin cancer.</p>2026-02-18T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1245Application of AI-Assisted Cognitive Behavioral Intervention Platform in Alleviating Performance Anxiety2026-02-06T09:56:29+00:00Junyao Wangwangjuny1994@163.comHyuntai Kimkimht@sejong.ac.kr<p>Performance anxiety is a widespread psychological challenge in high-stakes situations such as examinations, interviews, public speaking, and the performing arts. While cognitive behavioral therapy (CBT) is a well-established, evidence-based approach to managing anxiety, traditional delivery formats are limited in accessibility, personalization, and engagement. Advances in artificial intelligence (AI) and digital mental health technologies offer fresh opportunities to augment CBT delivery with adaptive, scalable, and user-centered tools. This paper introduces the design, implementation, and evaluation of an AI-assisted cognitive behavioral intervention (CBI) platform aimed at alleviating performance anxiety. The proposed system combines repeated micro-check-ins, contextual awareness (e.g., event type and time-to-event), and interaction-level engagement signals to build a dynamic representation of the user's state. Based on this state, the platform delivers CBT-consistent micro-interventions through a two-stage adaptive mechanism that combines rule-constrained candidate filtering with personalized, utility-based ranking.
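The two-stage mechanism of rule-constrained filtering followed by utility-based ranking can be sketched as follows; the interventions, constraints, and utility function are invented for illustration and are not the paper's actual rules:

```python
# Stage 1 applies hard contextual constraints; stage 2 ranks the survivors
# by a simple personalized utility. All names and weights are hypothetical.
INTERVENTIONS = [
    {"name": "box_breathing",     "min_minutes": 2,  "needs_privacy": False},
    {"name": "cognitive_reframe", "min_minutes": 5,  "needs_privacy": False},
    {"name": "guided_exposure",   "min_minutes": 15, "needs_privacy": True},
]

def candidate_filter(state, interventions):
    """Stage 1: keep only interventions whose hard constraints fit the context."""
    return [i for i in interventions
            if i["min_minutes"] <= state["minutes_to_event"]
            and (not i["needs_privacy"] or state["is_private"])]

def rank(state, candidates):
    """Stage 2: score candidates with a personalized utility and sort."""
    def utility(i):
        # Shorter exercises score higher close to the event;
        # past engagement with an intervention boosts its score.
        urgency = 1.0 / (1.0 + state["minutes_to_event"])
        return -urgency * i["min_minutes"] + state["engagement"].get(i["name"], 0.0)
    return sorted(candidates, key=utility, reverse=True)

state = {"minutes_to_event": 10, "is_private": False,
         "engagement": {"box_breathing": 0.4, "cognitive_reframe": 0.6}}
ranked = rank(state, candidate_filter(state, INTERVENTIONS))
```

Here `guided_exposure` is filtered out by the hard constraints (too long, needs privacy), and the utility then orders the remaining two candidates.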
To promote responsible deployment, the system includes an explainability layer that presents transparent, decision-level rationales for recommendations, together with ethical and safety guardrails that ensure non-diagnostic, supportive use. Experimental results show that the AI-assisted adaptive platform is more effective at producing short-term reductions in self-reported performance anxiety than static CBT delivery and heuristic adaptive baselines. We also observe improvements in user engagement, adherence, and perceived trust, underlining the importance of personalization and transparency in digital mental health interventions. The findings suggest that AI-supported CBT delivery can offer effective, scalable support for performance anxiety when human-centered, ethically grounded boundaries are respected.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1195Primary User Detection in Cognitive Radios: Challenges, Techniques, and Emerging Solutions2026-01-03T21:04:10+00:00Shraddha Nitin Magdumshraddhagaji1993@gmail.comTanuja Satish Dhope Shendkartanuja_dhope@yahoo.com<p>Cognitive Radio Networks (CRNs) address spectrum scarcity through intelligent spectrum management, enabling dynamic spectrum access for secondary users. However, traditional spectrum sensing techniques struggle with noise sensitivity and unstable Primary User (PU) dynamics, particularly in low Signal-to-Noise Ratio (SNR) environments. This paper proposes an Attention-based Deep Cognitive Network (ADCN) that integrates convolutional layers for spatial feature extraction, Long Short-Term Memory (LSTM) networks for temporal dependency modeling, and a self-attention mechanism to dynamically prioritize critical time-frequency characteristics.
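For context, the classical energy-detection baseline that learned detectors such as ADCN are typically compared against can be sketched as a threshold test on average signal energy; the threshold margin below is a simplified stand-in for the exact Gaussian-approximation formula, and the signal parameters are illustrative:

```python
import numpy as np

def energy_detect(samples: np.ndarray, noise_power: float, k: float = 3.0) -> bool:
    """Classical energy detection: declare a primary user (PU) present when
    the average energy exceeds the noise floor by k standard deviations of
    the noise-only energy estimate (std ~ noise_power * sqrt(2/N))."""
    n = len(samples)
    energy = np.mean(np.abs(samples) ** 2)
    threshold = noise_power * (1.0 + k * np.sqrt(2.0 / n))
    return bool(energy > threshold)

rng = np.random.default_rng(42)
n = 4096
noise_only = rng.normal(0.0, 1.0, n)                            # idle band
pu_signal = noise_only + 0.8 * np.sin(2 * np.pi * 0.05 * np.arange(n))
idle = energy_detect(noise_only, noise_power=1.0)
busy = energy_detect(pu_signal, noise_power=1.0)
```

Its weakness at low SNR is visible in the math: the detectable signal power shrinks only as 1/sqrt(N), which is why learned detectors are attractive at -20 dB.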
The model is trained and tested on the CSRD2025 dataset, with SNR levels ranging from -20 dB to 10 dB. Experimental results show that ADCN attains a bit error rate of 0.12 at -20 dB, considerably better than Energy Detection (0.60) and Matched Filter Detection (0.30). The model also delivers lower false-alarm rates and higher detection rates, and adapts to varying patterns of PU activity. These results indicate that ADCN is a robust and efficient solution for next-generation CRNs, enabling spectrum optimization even in low-SNR settings.</p>2026-02-18T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1243Reimagining Calligraphy Education in Higher Education through Artificial Intelligence and Interdisciplinary Pedagogy2026-02-06T10:12:12+00:00Yushuai LiuLiuys1232024@126.comAmer Shakir Bin Zainolamers781@163.com<p>Chinese calligraphy, a highly esteemed form of intangible cultural heritage, is constrained by traditional modes of instruction, which remain labor-intensive and hard to scale. Recent developments in artificial intelligence (AI) and interdisciplinary STEAM (Science, Technology, Engineering, Arts, Mathematics) pedagogy are potentially transformative in addressing this. This paper examines the combination of AI-based feedback and project-based learning in calligraphy classes as a way to integrate artistic tradition with technological innovation.
The study used a quasi-experimental design in which undergraduate students (n=642) at a large research university were randomly assigned to either an experimental group (n=321) receiving AI-enhanced instruction or a control group (n=321) taught with traditional methods. The experimental curriculum was built around a computer-vision pipeline (grayscale conversion, Gaussian blur, Canny edge detection, and ResNet-50-based convolutional neural networks) trained on 40,000 calligraphy samples and used to produce saliency maps and formative AI feedback. Students engaged in interdisciplinary STEAM-based activities connecting brushwork with geometry, physics, chemistry, and cultural studies. The experimental group improved by 7.8 points on post-test results (p < 0.001), with effect sizes of 0.78 to 1.12. Mediation analysis indicated that the frequency of AI feedback did not enhance aesthetic proficiency directly but operated through increased self-efficacy, underscoring the importance of pedagogical design over feedback frequency. These results show that AI-based stroke analysis, combined with an effective STEAM approach, positively affects both interest and cross-cultural awareness, suggesting broad applicability of the Calligraphy 2.0 model to the renewal of traditional arts and the preservation of cultural heritage through the digital humanities. The model offers a means of transforming conventional arts education into an interactive, data-driven experience applicable to heritage practices worldwide.</p>2026-03-04T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1211Confidence-Calibrated Dual-Branch Detection of Oral Cancer from Tongue and Lips Images2026-01-16T13:36:41+00:00V.
Gokula Krishnangokul_kris143@yahoo.comArvind Kumar Tiwariarvind@knit.ac.inM. Sumithramsumithra@panimalar.ac.inG. Mahalakshmig.mahalakshmi@velhightech.comN. Subhash Chandrasubhashchandra@cvr.ac.inM. Ganesanganesan.m@eec.srmrmp.edu.in<p>To detect oral cancer early from photographs of the tongue and lips, this research introduces a confidence-calibrated dual-branch framework. A lightweight texture branch (MLBP/HOG) preserves micro-texture, a global CNN encodes colour-shape context, and an attention gate fuses the branches per image. Since pixel-level annotations are unavailable, the model's attention is guided by CAM-consistency regularization to improve lesion localization under weakly supervised training. Cross-site robustness is improved through domain-adversarial alignment, while output probabilities are calibrated by temperature scaling. Under stratified evaluation on the Oral Cancer (Lips & Tongue) dataset, the model achieves Accuracy 0.892, Macro-F1 0.883, AUROC 0.912, AUPRC 0.884, and Brier score 0.092, with ECE reduced from 0.067 to 0.031 after calibration. Low post-calibration ECE (0.029/0.033) and high site-wise performance (Lips AUROC 0.922; Tongue 0.902) are maintained. Ablation demonstrates cumulative benefits from combining the texture branch, CAM-consistency, and domain alignment: compared to a baseline CNN (AUROC 0.872; AUPRC 0.834; ECE 0.050), the combined model performs best with minimal compute overhead. For clinical utility, an operating threshold θ* = 0.50 yields PPV 0.846, NPV 0.897, coverage 87.2%, referral rate 12.8%, sensitivity 0.892, and specificity 0.852. The system's calibrated probabilities and CAM overlays support trustworthy triage, and its robustness to site variability encourages real-world deployment on cloud or mobile platforms.
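Temperature scaling itself is a one-parameter post-hoc recalibration; a minimal sketch follows, where the logits and the temperature T are illustrative (in practice T is fit on a held-out validation set by minimizing negative log-likelihood):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits: np.ndarray, T: float) -> np.ndarray:
    """Post-hoc calibration: divide logits by a scalar temperature T before
    the softmax. T > 1 softens over-confident probabilities while leaving
    the predicted class (the argmax) unchanged."""
    return softmax(logits / T)

# Two toy predictions over a binary (lesion / no-lesion) problem.
logits = np.array([[4.0, 1.0], [0.5, 2.5]])
raw = softmax(logits)
calibrated = temperature_scale(logits, T=1.8)
```

The key property, preserved ranking with reduced confidence, is what lets calibration lower ECE without affecting accuracy.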
The results indicate that practical, reliable photo-based oral-cancer screening depends on complementary features, targeted regularization, and explicit calibration.</p>2026-02-18T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1194Edge-to-Cloud Continual Learning for Privacy-Preserving Chronic Disease Management2026-01-03T20:13:45+00:00D. Suresh Babuvidyasagar.voorugonda@nmims.eduV. Vidyasagarvidyasagar.voorugonda@nmims.eduB. Sarithavidyasagar.voorugonda@nmims.eduNamita Paratividyasagar.voorugonda@nmims.eduNagamani Chippadavidyasagar.voorugonda@nmims.eduVeeramachaneni Dhanasreevidyasagar.voorugonda@nmims.eduChinmayi Sree Chitra Channapragadavidyasagar.voorugonda@nmims.edu<p>Chronic disease management requires continuous monitoring and adaptive treatment strategies, yet traditional healthcare systems suffer from fragmented data collection and reactive interventions. This study presents an edge-to-cloud continual learning architecture that integrates wearable biosensor networks, longitudinal patient data, and privacy-preserving machine learning to enable personalized treatment recommendations. The system employs a three-tier computational model: edge devices perform low-latency real-time signal processing (135 ms), cloud servers provide secure storage and federated model aggregation, and continual learning algorithms adapt treatment plans as patient conditions evolve. The architecture implements iCaRL-based incremental learning with K=500 exemplar replay, combined with differential privacy (ε=2.1) and Paillier homomorphic encryption to protect patient confidentiality during model updates. A prospective clinical validation study enrolled N=132 patients (diabetes n=77, cardiac n=30, respiratory n=25) across three urban clinics over 12 weeks.
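The iCaRL-style exemplar selection used for replay can be sketched as herding: greedily choosing samples whose running mean best tracks the class-mean feature vector. The feature data below is synthetic and the dimensions are illustrative:

```python
import numpy as np

def herding_exemplars(features: np.ndarray, k: int) -> list:
    """iCaRL-style herding: greedily pick k exemplar indices whose running
    mean stays as close as possible to the class-mean feature vector."""
    mu = features.mean(axis=0)
    chosen, running = [], np.zeros_like(mu)
    for step in range(1, k + 1):
        # Distance of each would-be mean (running + candidate)/step to mu.
        gaps = np.linalg.norm(mu - (running + features) / step, axis=1)
        gaps[chosen] = np.inf          # sample without replacement
        idx = int(gaps.argmin())
        chosen.append(idx)
        running += features[idx]
    return chosen

rng = np.random.default_rng(7)
feats = rng.normal(size=(200, 16))     # synthetic per-sample feature vectors
ex = herding_exemplars(feats, k=20)    # replay buffer indices for this class
```

Replaying such a compact, mean-preserving buffer during later training rounds is what mitigates catastrophic forgetting in class-incremental settings.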
The edge-to-cloud system achieved 92.6% treatment recommendation accuracy (95% CI: 90.1-95.1%), representing a 6% improvement over the cloud-only baseline (86.8%) and a 17.6% improvement over static models (75.0%). The hybrid architecture reduced end-to-end latency by 65.3% compared to cloud-only processing (255 ms vs. 735 ms), meeting the <2-second requirement for acute clinical alerts. Privacy evaluation demonstrated a membership inference attack AUC of 0.50 (indicating formal privacy safety, threshold ≤0.55) while maintaining clinical accuracy. Backward transfer analysis showed 98.1% retention of prior knowledge after 100 learning rounds, with only 0.2% degradation, demonstrating effective mitigation of catastrophic forgetting. These results establish the feasibility of privacy-preserving, adaptive chronic disease management systems that combine edge intelligence with cloud-based population learning while maintaining patient confidentiality and clinical effectiveness.</p>2026-02-27T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1147A Personalized Federated Learning Framework for Post-Event Forensic Traffic Analysis in Autonomous Vehicle Systems2025-11-20T22:18:37+00:00Saadia Banosaadiabano16@gmail.comIsmail Kashifsaadiabano16@gmail.com<p>With the growing prevalence of autonomous vehicles (AVs) in modern transportation systems, post-incident forensic analysis of their operational data is becoming increasingly important for liability evaluation and traffic safety studies. However, strict privacy laws, exclusive data ownership, and proprietary platform architectures make it challenging for different AV entities to gain access to one another's raw sensor and telemetry data. To tackle these issues, this paper proposes a privacy-preserving federated learning framework designed for post-event forensic traffic analysis in autonomous vehicle systems.
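The standard FedAvg aggregation that such federated frameworks build on (and against which improvements are typically measured) can be sketched with toy parameter vectors; the operator names and dataset sizes are invented for illustration:

```python
import numpy as np

def fedavg(client_weights: list, client_sizes: list) -> np.ndarray:
    """Standard FedAvg: aggregate client model parameters as a weighted
    average, with weights proportional to each client's local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical AV fleet operators share parameter updates, never raw
# sensor or telemetry data; only these vectors cross organizational lines.
w_a = np.array([1.0, 2.0])   # operator A, 100 local trajectories
w_b = np.array([3.0, 0.0])   # operator B, 300 local trajectories
w_c = np.array([2.0, 2.0])   # operator C, 100 local trajectories
global_w = fedavg([w_a, w_b, w_c], [100, 300, 100])
```

Personalized variants keep client-specific layers local and aggregate only the shared parameters, which is the adaptation strategy this paper pursues.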
The proposed method enables manufacturers, infrastructure providers, and regulatory agencies to collaboratively build forensic intelligence without exposing or exchanging any sensitive local data, thereby preserving data privacy and complying with regulations. The core network is a spatiotemporal deep learning model that incorporates temporal, spatial, and attention mechanisms to reconstruct vehicle trajectories and identify abnormal driving behavior in complex traffic scenes. In addition, we propose a client-specific adaptation strategy that accommodates the diversity of AV platforms and traffic patterns, enabling personalized learning without compromising global model performance. To facilitate scalability and deployment, we employ a model compression scheme that minimizes communication overhead during federated updates. Experiments on simulated and real AV datasets show that the proposed approach simultaneously achieves robust trajectory reconstruction and effective anomaly detection with strong privacy guarantees and communication efficiency. Quantitatively, it improves trajectory prediction accuracy by around 15% over standard FedAvg while reducing communication overhead by nearly 30%.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1204Machine Learning-Based Classification of SARS-CoV-2 Structural Proteins Using Amino Acid Composition Analysis2026-01-12T17:02:34+00:00Anam Fatimabscs22f01@namal.edu.pkNasreenbscs22f10@namal.edu.pkHarram Sattarbscs22f07@namal.edu.pkMuhammad Bilalmuhammad.bilal@namal.edu.pkShafiq ur Rehman Khanshafiq.rehman@namal.edu.pk<p>The classification of SARS-CoV-2 protein types is important for understanding viral structure.
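The amino-acid-composition features named in the title can be sketched as a 20-dimensional relative-frequency vector plus sequence length; the sequence below is a toy string for illustration, not a real viral sequence:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def aa_composition(seq: str):
    """Amino-acid composition features: the relative frequency of each of
    the 20 standard residues, plus the raw sequence length."""
    seq = seq.upper()
    n = max(len(seq), 1)                # guard against empty sequences
    return [seq.count(a) / n for a in AMINO_ACIDS] + [float(len(seq))]

# Toy 13-residue fragment (illustrative only).
features = aa_composition("MFVFLVLLPLVSS")
```

Each sequence thus becomes a fixed-length 21-dimensional vector, which is exactly the kind of input classical classifiers such as KNN or Random Forest require.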
This study presents a comprehensive machine learning approach for classifying the four major SARS-CoV-2 structural proteins: Spike, Membrane, Envelope, and Nucleocapsid. We collected 40,000 protein sequences from the NCBI protein database, 10,000 per protein type, through automated web scraping and parsing. After cleaning the data and removing outliers, we obtained a dataset of 28,206 proteins. We evaluated five machine learning algorithms, Random Forest, Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree, and Logistic Regression, using accuracy, precision, recall, and F1-score metrics. The K-Nearest Neighbors classifier achieved the highest accuracy, 98%. Feature-importance analysis revealed that sequence length and specific amino acids are the main discriminating factors, providing biological insight into the structural differences between the protein types. These results demonstrate the effectiveness of amino-acid-composition features for SARS-CoV-2 protein classification and provide an efficient framework for automated protein type identification.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1222Synergistic Fusion of Clinical Interview EEG and Video for Depression Detection: A Cross-Modal Attention Approach2026-01-16T07:49:27+00:00Janaswami Hymavathihymavathi.j.kluscholar@gmail.comChokka Anuradhadranuradha@kluniversity.in<p>Objective quantification of Major Depressive Disorder (MDD) remains a substantial clinical challenge due to the inherent subjectivity of traditional diagnostic interviews.
This paper presents a novel multimodal deep learning framework that synergistically integrates neurophysiological signals and behavioural cues for automated depression detection. Utilizing the Multi-modal Open Dataset for Mental-disorder Analysis (MODMA), we analyze synchronized 128-channel EEG and video recordings obtained during professional clinical assessments. Our architecture employs a dual-stream approach: a Graph Convolutional Network (GCN) combined with a Long Short-Term Memory (LSTM) network to capture the spatiotemporal dynamics of brain activity, and a 3D Convolutional Neural Network (3D-CNN) with a temporal attention mechanism to extract behavioral markers from facial expressions. A cross-modal attention module fuses these modalities, allowing the model to learn the complex interdependencies between neural states and overt behavior. To ensure clinical generalizability and prevent data leakage, the framework was evaluated using a strict subject-independent 10-fold cross-validation scheme. Experimental results demonstrate state-of-the-art performance, achieving an accuracy of 92.1% and an F1-score of 92.5%. These findings suggest that the proposed multimodal integration offers a powerful and objective tool for mental health screening, enhancing diagnostic precision through the fusion of brain and behavioral biomarkers.</p>2026-02-20T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1266Physics-Aware Graph Neural Networks for Real-Time Defect Detection and Environmental Impact Mitigation in Industrial Welding2026-02-06T09:44:13+00:00Ziyuan Kangkzy1127129260@163.com<p>Resistance Spot Welding (RSW) is a fundamental and energy-intensive process in industrial production, where the interaction of electrical, thermal, and mechanical variables often obscures the connection between process quality and environmental impact.
We introduce a new Spatio-Temporal Graph Neural Network (STGNN) framework that pursues the twin objectives of real-time defect detection and environmental emission reduction. By modeling the welding process as a dynamic graph whose nodes correspond to voltage, current, and force sensors, we use Graph Temporal Transformers and Graph Attention Network (GATv2) architectures to decode transient cross-channel relationships throughout the welding cycle. This methodology yields a physics-aware latent space that captures the spatial dynamics of the electrodes as well as the temporal dynamics of the weld nugget. Extensive benchmarking against seven deep learning and ensemble variants shows that our framework attains near-perfect regression stability and the best results in emission proxy modeling. Despite extreme class imbalance in the industrial data (4% defect instances), the proposed architecture isolates defect signatures in a high-contrast 3D feature space (deep blue for optimal vs. crimson red for defective). Numerical validation supports ultra-low inference latencies, down to the millisecond-per-sample range, allowing seamless integration into high-rate production systems.
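One simplified attention-weighted message-passing step over such a sensor graph can be sketched as follows, using raw feature similarity as a stand-in for GATv2's learned attention; the toy graph and node features are invented for illustration:

```python
import numpy as np

def attention_message_pass(H: np.ndarray, A: np.ndarray) -> np.ndarray:
    """One simplified graph-attention step: each sensor node aggregates its
    neighbors' features, weighted by a softmax over similarity scores
    (a stand-in for GATv2's learned attention coefficients)."""
    scores = H @ H.T                        # pairwise similarity logits
    scores = np.where(A > 0, scores, -1e9)  # mask out non-edges
    scores = scores - scores.max(axis=1, keepdims=True)
    alpha = np.exp(scores)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # rows sum to 1
    return alpha @ H                        # attention-weighted aggregation

# Toy welding graph: three sensor nodes (voltage, current, force) with
# 2-dim features, fully connected including self-loops.
H = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
A = np.ones((3, 3))
H_next = attention_message_pass(H, A)
```

Because each output row is a convex combination of the input rows, updated node features stay within the range of the originals; stacking such steps with learned projections is what lets the full model mix cross-channel information.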
This study charts a clear path toward so-called Zero-Defect green manufacturing, demonstrating that graph-based reasoning can successfully decouple industrial productivity from environmental externalities.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1140Systematic Literature Review on Computational Models Used For Sign Language Recognition2025-11-20T22:02:23+00:00Mohsin Samimohsin.sami@ucp.edu.pkRabia Tehseenrabia.tehseen@ucp.edu.pkUzma Omerrabia.tehseen@ucp.edu.pkMuhammad Farrukh Khanrabia.tehseen@ucp.edu.pkShahan Yamin Siddiquirabia.tehseen@ucp.edu.pkNabeel Sabir Khannabeel.bloch@ucp.edu.pkDanish Ali Khanrabia.tehseen@ucp.edu.pk<p>Sign Language Recognition (SLR) is a popular research area, yet it remains under-explored owing to its complex nature and resource limitations. This systematic literature review examines approaches to developing automatic sign-language recognition systems, covering studies and working models published from 2015 to 2025; in total, 60 studies with differing methodologies are reviewed. Datasets of American Sign Language (ASL) emerge as among the most commonly used across studies. The MediaPipe Holistic model, Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNNs), Artificial Neural Networks (ANNs), and Support Vector Machines (SVMs) are among the most frequently applied techniques. Our work presents a comprehensive taxonomy of approaches and establishes a timeline of the approaches emphasized in the literature, guiding suggestions on which approaches to pursue in future work. We also identify the most frequently processed datasets and the regions on which the literature focuses.
As a contribution to SLR research, this systematic literature review presents a state-of-the-art survey exploring multiple dimensions of the field and is intended to serve future research.</p>2026-02-18T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1244Deep Learning–Based Emotion Classification Models for Chinese and Korean OST Music2026-02-06T10:09:30+00:00Quanrui Lu18845075075@163.comHyuntai Kimkimht@sejong.ac.kr<p>Music Emotion Recognition (MER) has advanced significantly with deep learning; however, existing models tend to exhibit cultural bias and struggle to recognize emotion in non-Western musical structures. This paper proposes a deep learning framework designed specifically for emotion classification in Chinese and Korean Original Soundtracks (OSTs), which feature distinctive tonal dynamics and high emotional variance. We propose a Dual-Stream Convolutional Recurrent Neural Network (CRNN) with Self-Attention that captures both the spectral-spatial characteristics and the temporal melodic development common in Asian cinematic music. To validate the model, we use two region-specific datasets, PMEmo (Chinese popular music) and EMOPIA (Korean/Asian piano OSTs). Experimental results show that the proposed architecture obtains an accuracy of 88.4% and an F1-score of 0.87, outperforming baseline models (ResNet-50 and a standard LSTM) by a 5.2% margin. The research confirms that culturally aware training data is vital for accurate affective computing in the music domain.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1192Explainable Multimodal Fusion of Genomic and Clinical Data for Multi-Disease Prediction: A Deep Learning Approach2026-01-03T18:30:19+00:00C. Raghavendracrg.svch@gmail.comR. Suneetha Ranicrg.svch@gmail.comCH. N.
Santhosh Kumarcrg.svch@gmail.comVeeramachaneni Dhanasreecrg.svch@gmail.comD. Swapnacrg.svch@gmail.comGoguri Rashmithacrg.svch@gmail.com<p>Precision medicine is an effort to customize healthcare treatment based on individual-specific genetic, clinical, and environmental traits. This study introduces an explainable AI platform that fuses genomic and clinical information to enhance disease prediction and individualized treatment regimens. Pre-processed and normalized multi-modal datasets consisting of whole-genome sequencing, gene expression data, and electronic health records were integrated through a hybrid data fusion process. Feature engineering and dimensionality reduction techniques were used to discover biologically and clinically significant patterns, followed by meticulous training of a multi-layer neural network for prediction. Explainability was incorporated through SHAP and Layer-wise Relevance Propagation to identify the most influential genomic and clinical features driving model decisions. The results indicate the superiority of the joint model over single-modality models across all disease prediction tasks, with improved accuracy, precision, recall, and F1-scores. Feature-importance analysis revealed key genomic variants and clinical predictors influencing predictions, enhancing model interpretability. 
These findings demonstrate the potential of explainable AI to integrate genomic and clinical data to support improved diagnosis, guide tailored therapies, and establish trust in AI-based clinical decision-making, paving the way for real-world application in precision medicine.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1135Reinforcement Learning for Customer Lifetime Value Optimization: A Conceptual Framework and Directions for Future Research2026-01-16T07:10:30+00:00Hani Iwidathani.iwidat@pass.ps<p>As a technique for improving sequential decision-making in customer-centric marketing, reinforcement learning has attracted growing interest. With particular attention to its alignment with customer lifetime value maximization, this paper investigates how reinforcement learning has been employed in customer modeling, personalization, pricing, engagement, and retention. This study employed a conceptual, theory-synthesis research methodology, beginning with a review of the Google Scholar, Scopus, and Web of Science databases using keywords such as 'reinforcement learning' AND 'customer lifetime value' (2015-2026), yielding more than 100 studies after screening. Although current research shows great potential to affect long-term customer behavior, most applications rely on short-term or surrogate performance indicators rather than explicitly maximizing lifetime value. This study synthesizes previous work logically and theoretically to offer a conceptual framework connecting reinforcement learning to lifetime value optimization, presents a taxonomy of approaches and tasks, and highlights major obstacles, knowledge gaps, and future directions to help overcome this limitation. 
The paper positions reinforcement learning as the basis for creating ethical, value-driven, scalable consumer intelligence systems.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informaticshttps://www.jcbi.org/index.php/Main/article/view/1203DenseNet-Based Detection of AI-Generated Driving Scene Images2026-01-12T16:03:23+00:00Dhairya Vyasdhairyavyas@live.comMilind Shahmilindshahcomputer@gmail.comKhushboo Trivedikhushboo.trivedi21305@paruluniversity.ac.inBhasha Anjariabhasha.anjaria14450@paruluniversity.ac.inBhumi Shahbhumi.shah19174@paruluniversity.ac.inSachin Patelsachink248@gmail.com<p>The proliferation of deepfake technologies poses a significant challenge to the integrity of image data used in autonomous driving systems, where the distinction between real and manipulated images is critical for safe and reliable operation. This study proposes a novel deepfake detection framework designed specifically for real and fake image classification in autonomous driving environments. The primary aim is to enhance the robustness of autonomous systems against adversarial manipulations by leveraging advanced deep learning techniques. The proposed model incorporates DenseNet blocks to efficiently extract hierarchical features from complex visual data, ensuring improved detection accuracy and computational efficiency. The methodology includes preprocessing the dataset, augmenting it to simulate real-world variations, and training the model on a diverse set of real and fake images. Experimental results demonstrate the efficacy of the proposed framework, achieving 98% classification accuracy, thereby underscoring its potential as a reliable solution for real-time deepfake detection in autonomous driving scenarios.</p>2026-03-01T00:00:00+00:00Copyright (c) 2026 Journal of Computing & Biomedical Informatics