https://www.jcbi.org/index.php/Main/issue/feed Journal of Computing & Biomedical Informatics 2025-04-19T18:50:06+00:00 Journal of Computing & Biomedical Informatics editor@jcbi.org Open Journal Systems <p style="text-align: justify;"><strong>Journal of Computing &amp; Biomedical Informatics (JCBI) </strong>is a peer-reviewed open-access journal that is recognised by the Higher Education Commission (H.E.C.) Pakistan. JCBI publishes high-quality scholarly articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. All submitted articles should report original, previously unpublished research results, experimental or theoretical. Articles submitted to the journal should meet these criteria and must not be under consideration for publication elsewhere. Manuscripts should follow the style of the journal and are subject to both review and editing. JCBI encourages authors of original research papers to describe work such as the following:</p> <ul> <li>Articles in the areas of computational approaches, artificial intelligence, big data, software engineering, cybersecurity, the Internet of Things, and data analysis.</li> <li>Articles that report substantive results on a wide range of learning methods applied to a variety of learning problems.</li> <li>Articles that provide solid support via empirical studies, theoretical analysis, or comparison to psychological phenomena.</li> <li>Articles that respond to a need in medicine or analyze rare data with novel methods.</li> <li>Articles that convey healthcare professionals' motivation for the work; evaluative results are usually necessary.</li> <li>Articles that show how to apply learning methods to solve important application problems.</li> </ul> <p style="text-align: justify;">Journal of Computing &amp; Biomedical Informatics (JCBI) covers the interdisciplinary field that studies and pursues the effective use of computational and biomedical data, information, and knowledge for scientific
inquiry, problem-solving, and decision making, motivated by efforts to improve human health. Novel high-performance computing methods, big data analysis, and artificial intelligence that advance material technologies are especially welcome.</p> https://www.jcbi.org/index.php/Main/article/view/881 Unveiling 6G Networks: Innovations, Challenges, and Future Research 2025-03-04T03:30:52+00:00 Waqas Ahmad waqas.ahmad@kc.au.edu.pk Uzma Batool uzma.batool@kc.au.edu.pk Muhammad Kashif Aslam engr.kashif@upr.edu.pk Waseem Younas waseem.younas@kc.au.edu.pk <p>With the advent of fifth-generation (5G) wireless communication technology, several intelligent applications are being integrated into various domains. However, 5G specifications fall short of meeting the demands of emerging technologies such as connected autonomous vehicles, artificial intelligence (AI) / cloud integration, Smart Grid 2.0, collaborative robots, Industry 5.0, digital twins, extended reality, and hyper-intelligent healthcare. These cutting-edge applications require enhanced technical capabilities, including higher data rates, greater network capacity, ultra-low latency, improved reliability, efficient resource allocation, expanded bandwidth, and optimal energy efficiency per bit. Since existing 5G technology does not fully address these evolving requirements, research and development efforts must pivot toward sixth-generation (6G) wireless technologies to bridge the existing gap. This study comprehensively unveils 6G innovations, exploring key technological advancements, including ultra-low latency, enhanced data rates, and improved energy efficiency. Additionally, it identifies critical challenges such as security attacks and data privacy risks.
The study also highlights potential research directions to address these challenges and ensure the successful deployment of 6G.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/796 Personality Prediction of the Users Based on Tweets through Machine Learning Techniques 2025-01-03T07:48:00+00:00 Shiza Aslam shizaaslam84@gmail.com Muhammad Usman Javeed usmanjavveed@gmail.com Shafqat Maria Aslam shafqatmaria34@gmail.com Muhammad Munwar Iqbal munwariq@gmail.com Hasnat Ahmad hasnat.ahmad@uettaxila.edu.pk Anees Tariq anees.tariq@szabist-isb.edu.pk <p>With the advancement of social networks, the massive, rich data from social platforms such as Facebook, YouTube, Instagram, and Twitter supplies revealing information about social interactions and human behavior. A variety of approaches have been developed to characterize users' personalities based on their social activities and language-use habits. These approaches differ in their machine-learning algorithms, data sources, and feature sets. Social network applications record an enormous amount of user behavior expressed in activities such as likes, mentions, posts, comments, photographs, tags, historical textual features, tweets, user profiles, and shares. Many researchers have used classical machine-learning algorithms to build their models. This study implements several deep learning architectures and compares them through an exhaustive empirical investigation. We inspect the structural and semantic attributes of social networks relative to personality correlations using the myPersonality project dataset. The investigation also compares two machine-learning models and computes the correlation between each feature set and the personality traits.
The prediction-accuracy results show that, even when tested on the same dataset, the personality-prediction framework based on the Logistic Regression classifier outperforms the baseline for all feature sets, with a prediction accuracy of 98.9%. The best prediction accuracy of 99.8% is obtained with the Random Forest classifier.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/848 Blockchain in the Digital Age: Challenges, Opportunities, and Future Trends 2025-02-07T09:57:52+00:00 Khushbu Khalid Butt irshadsomra88@gmail.com Muhammad Yousif irshadahmed@lgu.edu.pk Irshad Ahmed Sumra irshadahmed@lgu.edu.pk Abubakar Qazi irshadahmed@lgu.edu.pk Sajid Khan irshadahmed@lgu.edu.pk Muhammad Amjad khan irshadahmed@lgu.edu.pk <p>Blockchain technology offers a massive network with built-in security features that encompass cryptography, decentralization, and consensus, which foster trust in transactions. The IoT, finance, and security further stand out as emerging application areas of blockchain. The basic need of every blockchain consumer is, first, to prioritize data confidentiality, integrity, and availability. In 2025, trust is a necessary part of security for third parties, which handle both private and public privileges. These advantages and disadvantages motivated us to provide an up-to-date and comprehensive study of the applicability of blockchain technology. This paper focuses on blockchain security issues and sorts out the security risks in the six layers of blockchain technology by comparing and analyzing existing security measures.
The text also investigates and describes various security threats and obstacles associated with implementing blockchain technology, fostering theoretical inquiry and the creation of strong security protocols in current and forthcoming distributed work settings.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/863 AI-Enhanced Bioactive 3D-Printed Scaffolds for Tissue Regeneration: Innovations in Healing and Functional Additives 2025-02-14T10:28:09+00:00 Adam Rafiq Jeraj adamrafiq218@gmail.com Zulekha Zameer 235680770@formanite.fccollege.edu.pk <p>Bioactive 3D-printed scaffolds have revolutionized tissue engineering and regenerative medicine by enabling precise fabrication of biomimetic structures that promote cell adhesion, proliferation, and differentiation. However, significant challenges remain, particularly in optimizing scaffold composition, bioactive additive integration, and long-term stability for clinical applications. This review provides a systematic analysis of recent advancements in AI-driven bioactive scaffolds and their role in personalized regenerative medicine. A comparative evaluation of major 3D printing techniques—Fused Deposition Modeling (FDM), Stereolithography (SLA), Selective Laser Sintering (SLS), and Direct Metal Laser Sintering (DMLS)—is presented, focusing on resolution, material compatibility, and bioactive additive incorporation. Additionally, we analyze key bioactive agents (growth factors, nanoparticles, peptides, and natural polymers) and their effects on biocompatibility, mechanical strength, and therapeutic efficacy. AI-powered optimization techniques, including machine learning-based scaffold design, computational modeling, and predictive analytics, are emerging as transformative solutions for improving scaffold architecture, drug delivery systems, and patient-specific applications. 
Despite significant progress, major challenges persist, including standardization in scaffold fabrication, long-term in vivo validation, and regulatory approval hurdles. Addressing these scientific and regulatory challenges is essential for the successful clinical translation of bioactive scaffolds. This review highlights the need for interdisciplinary collaboration to advance AI-assisted scaffold engineering and establish personalized treatment strategies for next-generation regenerative medicine.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/914 Optimized Skin Cancer Detection through Dermoscopic Imaging Using EfficientNetB4 Architecture 2025-03-19T18:15:27+00:00 Aymen husainzulfiqar60@gmail.com Muhammad Suleman csit.suleman@uos.edu.pk Hafiz Muhammad Faisal Shehzad muhammad.faisal@uos.edu.pk Samreen Razzaq samreen.razzaq@uos.edu.pk Anam Safdar Awan nargesshahbaz20137@gmail.com Narges Shahbaz nargesshahbaz20137@gmail.com <p>Background<strong>: </strong>Skin cancer classification is a challenging task due to the fine-grained diversity in the appearance of various diagnostic categories. Detecting skin cancer at an early stage is vital for enhancing patient outcomes, as the prognosis for this condition greatly improves when diagnosed early. Convolutional neural networks have been found to be more effective than dermatologists in classifying multiclass skin cancer. Problem<strong>: </strong>The identification of skin cancer is frequently impeded by the subjective analysis of dermoscopic images, resulting in misdiagnoses and delayed treatments. The objective of this study is to create a reliable and effective classification system using the EfficientNetB4 model, which will aid in early detection and ultimately enhance patient outcomes.
Objective<strong>:</strong> The main goal of this study is to create a highly efficient and accurate classification system for skin cancer using the EfficientNetB4 model. The goal of this system is to improve the accuracy of diagnoses, minimize misdiagnoses, and enable early detection of skin lesions, leading to better patient outcomes and a more efficient diagnostic process in dermatology. Methods: The EfficientNetB4 model is trained on the HAM10000 dataset using transfer learning and fine-tuning, with rotation, zoom, and flip augmentations applied to create image variations. Hyperparameters were then adjusted in the fine-tuning step so that the model's weights fit the skin-lesion classification task more precisely. Results<strong>: </strong>The leading model, EfficientNetB4, achieved a Top-1 accuracy of 89.22%, a Top-2 accuracy of 88.82%, and a Top-3 accuracy of 88.62%. Precision, recall, and F1 scores are computed for each class. The model demonstrated excellent performance on melanoma (MEL) and benign keratosis-like lesions (BKL). Criteria accounting for the high class imbalance were used in the assessment of the EfficientNet classifiers. Models with an intermediate level of complexity, such as EfficientNetB4, demonstrated the most optimal performance. Confusion matrices were also found to be useful in identifying the skin cancer varieties with the greatest capacity for generalization. Conclusion<strong>:</strong> Overall, EfficientNetB4 demonstrated superior performance in classifying multi-class skin cancer. Further development could include oversampling or synthetic data generation as additional class-balancing techniques to improve performance on underrepresented classes.
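The rotation, zoom, and flip augmentations described in the methods can be sketched with plain NumPy; this is a minimal illustration of the idea, not the authors' actual Keras pipeline, and the function name is ours (zooming would work analogously via cropping and resizing):

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Return simple augmented variants of an H x W x C image:
    90/180-degree rotations plus horizontal and vertical flips."""
    return [
        np.rot90(image, k=1),    # rotate 90 degrees
        np.rot90(image, k=2),    # rotate 180 degrees
        np.flip(image, axis=1),  # horizontal flip
        np.flip(image, axis=0),  # vertical flip
    ]

# Toy 2x2 single-channel "image" standing in for a dermoscopic photo
img = np.arange(4, dtype=np.float32).reshape(2, 2, 1)
variants = augment(img)
```

Each variant preserves the label of the original lesion image, which is what lets augmentation multiply the effective training-set size for underrepresented classes.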
More medical data, including images and clinical data, will probably increase the overall diagnostic accuracy.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/861 A Comprehensive Review on the Role of AI in Phishing Detection Mechanisms 2025-02-13T12:08:26+00:00 Sajjad Ahmed irshadsomra88@gmail.com Irshad Ahmed Sumra irshadahmed@lgu.edu.pk Ijaz Khan irshadahmed@lgu.edu.pk Hadi Abdullah irshadahmed@lgu.edu.pk <p>The integration of Artificial Intelligence (AI) in various domains, such as healthcare, education, business, and generative applications, is thoroughly reviewed in this paper, emphasizing the transformative and detection power of AI. The operational efficiency of these applications has been greatly increased by AI-driven innovations such as automation, personalized healthcare solutions, and improved decision-making optimization. In addition to providing a critical analysis of AI's advantages and disadvantages, the review synthesizes findings from several studies and addresses ethical concerns, data privacy issues, bias, and security risks related to generative AI technologies. The implications of AI-driven disinformation and the changing regulatory environment are two of the new insights this paper offers. Even with significant advancements, there are still issues that need to be addressed, especially with regard to managing the social effects of AI, creating uniform ethical frameworks, and guaranteeing data quality. This study adds to the body of literature by highlighting the necessity of strong mitigation techniques, interdisciplinary cooperation, and ethical AI governance to promote responsible AI deployment.
Recommendations for future work include developing robust mitigation strategies, ethical guidelines, and interdisciplinary cooperation to ensure that AI technology is implemented successfully.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/925 Advanced Collaborative Robotics: Enhancing Industrial Cobots with Conversational Interaction and Computer Vision 2025-03-24T10:51:22+00:00 Muhammad Imran Akhtar Mobileapp690@gmail.com Shahid Ameer shahidameer.khan@gmail.com Narges Shahbaz nargesshahbaz20137@gmail.com Ayesha Mumtaz ayeshanouman031@gmail.com Sehar Gul sehar.gul@iba-suk.edu.pk Hssan Nawaz nargesshahbaz20137@gmail.com <p>Industry 4.0 has revolutionized modern manufacturing by integrating advanced automation, data exchange, and smart technologies. At the forefront of this transformation are collaborative robots (cobots), which have become indispensable components of smart manufacturing systems due to their flexibility and safety features. This study introduces a novel approach that enhances cobot functionality by merging conversational AI and computer vision, enabling adaptive human-robot interaction and dynamic task execution in industrial environments. The proposed system leverages a transformer-based conversational module designed to facilitate natural language communication between human operators and cobots. This module allows workers to issue commands, receive feedback, and seamlessly coordinate complex tasks without requiring specialized programming skills. In parallel, a YOLOv5-based vision module is integrated for real-time object detection and defect identification, significantly improving situational awareness and task precision. Comprehensive evaluations demonstrate that the integrated system achieves a 33% reduction in task completion time compared to conventional cobot setups.
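Precision and recall, the metrics used to evaluate the YOLOv5 vision module, are derived from true-positive, false-positive, and false-negative counts; a minimal sketch (the counts below are hypothetical, chosen only to produce figures of the reported magnitude):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical detection counts over an evaluation set
p, r = precision_recall(tp=935, fp=37, fn=65)
```

Precision penalizes spurious detections (false alarms on good parts), while recall penalizes missed objects or defects; a deployed inspection system typically needs both to be high.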
Additionally, the object detection model attains a precision of 96.2% and a recall of 93.5%, ensuring reliable and accurate identification of objects and potential defects. These advancements significantly enhance task efficiency, accuracy, and overall usability, surpassing the current state of the art. Furthermore, the system's conversational capabilities empower operators to adjust processes dynamically, minimizing downtime and improving workflow management. The combination of conversational interaction and computer vision not only augments operational performance but also fosters a more intuitive and human-centric collaborative environment. This research highlights the transformative potential of integrating conversational AI and computer vision into cobots, paving the way for next-generation collaborative robotics within Industry 4.0 applications. However, challenges remain in optimizing contextual understanding and refining vision algorithms to further boost performance and adaptability. Future work will focus on enhancing the cobot's ability to process complex contextual cues and improving the robustness of object recognition under variable conditions. By addressing these challenges, the proposed system will better meet the demands of dynamic industrial settings and continue to shape the evolution of human-robot collaboration.</p> <p> </p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/947 A Framework for Sarcasm Detection Incorporating Roman Sindhi and Roman Urdu Scripts in Multilingual Dataset Analysis 2025-04-18T19:15:02+00:00 Majdah Alvi majida.alvi@iub.edu.pk Muhammad Bux Alvi mbalvi@iub.edu.pk Noor Fatima noorfatima440428@gmail.com <p>Sarcasm detection is imperative for successful real-time sentiment analysis in the pervasive social web. 
Detecting sarcastic tones in text that convey bitter, satirical, or mocking expressions, remarks, or derision is problematic even for humans; automating it in Natural Language Processing (NLP) is more arduous still. This work proposes a sarcasm detection framework that optimizes a sentiment analysis system by correctly detecting sarcastic text messages for resource-poor languages in multilingual datasets. The techniques developed to date are inadequate and require precise training data. Therefore, we propose neural network and deep learning-based models that focus on contextual information utilizing different word embedding techniques, and we further propose a framework for multilingual sarcasm detection resources for low-resource languages such as Roman Sindhi and Roman Urdu. With this sarcasm-aware framework, individuals with limited English proficiency will be better equipped to engage on social media using sarcastic tones, emojis, and creative linguistic variations in multilingual textual data analysis.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/883 Decision Making Framework for Phishing Incident Response Using Intuitionistic Fuzzy Trapezoidal Preference Relations 2025-03-10T08:11:05+00:00 Muhammad Touqeer touqeer.fareed@uettaxila.edu.pk Syeda Sadia Gilani ssgilani.97@gmail.com Adeel Ahmed adeel.ahmed@szabist-isb.edu.pk Awais Mahmood awais.mahmood@szabist-isb.edu.pk Muhammad Munwar Iqbal munwariq@gmail.com <p> Decision-making often involves uncertainty, requiring precise methodologies to ensure accuracy and reliability. This paper presents an intuitionistic fuzzy trapezoidal preference relation (IFTrPR)-based decision-making framework that integrates multiplicative consistency for improved priority weight assessment.
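The ranking step in such frameworks is TOPSIS, which scores each alternative by its relative closeness to an ideal solution. The classical crisp version, on which the intuitionistic fuzzy trapezoidal extension builds, can be sketched as follows (the example matrix and weights are illustrative only):

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Rank alternatives (rows) over benefit criteria (columns) by
    relative closeness to the ideal solution. Returns scores in [0, 1]."""
    # Vector-normalize each criterion column, then apply criterion weights
    norm = matrix / np.linalg.norm(matrix, axis=0)
    v = norm * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)  # best/worst per benefit criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)

# Three response alternatives scored on two equally weighted benefit criteria
scores = topsis(np.array([[7.0, 9.0], [8.0, 7.0], [9.0, 6.0]]),
                np.array([0.5, 0.5]))
best = int(np.argmax(scores))
```

The intuitionistic fuzzy variant replaces the crisp scores with trapezoidal membership/non-membership numbers, but the closeness-to-ideal ranking logic is the same.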
The proposed approach determines intuitionistic fuzzy trapezoidal priority weight vectors and ranks alternatives using the technique for order preference by similarity to the ideal solution (TOPSIS). To enhance the consistency of priority weights, a Linear Decision Model (LDM) is employed, effectively capturing decision-makers’ perceptions. Additionally, Model 1 is introduced to compute priority weights based on intuitionistic fuzzy trapezoidal numbers (IFTNs) across various alternatives. The integration of fuzzy logic and optimization techniques strengthens the framework’s ability to handle complex decision-making problems. A comparative analysis with hierarchical fuzzy systems (HFS) demonstrates that the proposed method enhances accuracy and reliability in priority weight assessment. Furthermore, the study provides a systematic approach to handling linguistic variables in decision-making, particularly in the representation of membership (MS) and non-membership.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/801 Enhanced Deep Learning Based X-Ray Analysis for COVID-19 Identification 2025-01-21T13:14:59+00:00 Hamza Iftikhar 21-CS-44@students.uettaxila.edu.pk Anees Tariq anees.tariq@szabist-isb.edu.pk Awais Mahmood awais.mehmood@szabist-isb.edu.pk Muhammad Munwar Iqbal munwariq@gmail.com Noor-ul-Ain Yousaf 21-CS-105@students.uettaxila.edu.pk <p>The rapid and accurate detection of COVID-19 is critical for mitigating its transmission and ensuring timely medical intervention. This research enhances COVID-19 detection by implementing artificial neural network algorithms. Our paper employs a convolutional neural network for efficient and accurate detection of COVID-19, taking chest X-ray images of patients as input.
The proposed model development involves systematic steps, including data acquisition, preprocessing, and augmentation, as well as the application of a convolutional neural network to the prepared data. The dataset utilized for this research on COVID-19 detection using an Artificial Neural Network (ANN) is obtained from Kaggle. The dataset consists of three classes: normal, viral, and COVID-19 affected. The visual data of these three classes is utilized to train and test the model. PCR testing is the most widely used technique for COVID-19 detection, but it is expensive for middle- and lower-income families, so our approach overcomes this financial barrier by using X-ray images to detect whether a patient is infected with COVID-19. Accurate identification of COVID-19 cases is vital for controlling its transmission. Minimizing false negatives ensures timely care for infected individuals, reducing spread. Our proposed model achieves an accuracy of 95% using a multi-layer convolutional neural network.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/946 Effective Study of Design Perception of Learning Artifacts for Online-Learning Systems 2025-04-07T08:15:59+00:00 Noman Ali Aslam Nomanaliaslam@outlook.com Muhammad Zubair Tahir Mzubair122@gmail.com Rabbia Alamdar rabbia.alamdar0@gmail.com Muhammad Zahid Hussain m.zahidhussain@ucp.edu.pk Salman Akram salmanakram.pak@gmail.com Mohsin Shahzad mohsin.s122@gmail.com Muhammad Daniyal Butt Danialbutt79@gmail.com <p>There are several different artifacts for designing a website, and they play a very prominent role in the aesthetics and appeal of any website. A website must therefore look attractive and appealing to the eyes of its users.
This research work reveals the aspects that directly or indirectly affect users' cognitive beliefs, emotions, and feelings while using a website: whether the website's attributes lead to positive emotions such as satisfaction, ease of use, and appeal, or to negative emotions such as frustration, irritation, dissatisfaction, or annoyance. The focus is on web-based learning applications, with the aim of making them less frustrating and more appealing, for students as well as teachers. To that end, two learning websites, Coursera and edX, were analyzed. The analysis is based on a survey conducted after participants used the websites, capturing their cognition of each site; users' perceptions and preferences were then assessed by relating the attributes of the two websites. The artifacts mainly examined are aesthetic aspects (color, typography) and information architecture (content quality, navigation, interactivity).</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/882 Unveiling Data Scientist Salaries: Predictive Modeling for Compensation Trends 2025-03-04T06:03:11+00:00 Muhammad Taha tahaishaq61@gmail.com Tayyaba Farhat tayyaba.farhat@superior.edu.pk Arsham Azam arshamrana36@gmail.com Muhammad Ahmar ahmerrandhawa24@gmail.com Syed Jalal Abbas jalalabbas2939@gmail.com Muhammad Umar Habib muhammadumarhabib@gmail.com <p>The fast pace of artificial intelligence growth with big data has rendered data science one of the most in-demand jobs in the world. Data scientists' remuneration structures, though, demonstrate significant heterogeneity by region, industry, and experience, thereby making career planning difficult for both new and experienced entrants.
Current studies tend to be based on small data samples or basic statistical techniques, hence neglecting the intricacies of the determinants of salaries. This study utilizes advanced machine learning techniques, including decision trees, ensemble techniques, and eXtreme Gradient Boosting (XGBoost), to build a model for classifying and estimating data science salaries using important determinants such as experience, location, firm size, and job title. The model suggested in this study achieves an accuracy of 92.3% with a Random Forest algorithm, higher than conventional regression-based techniques. Feature importance analysis reveals that experience accounts for 45.7% of salary variation, followed by firm size (22.8%) and location (18.6%). Being data-driven, this research gives practical suggestions to job seekers, organizations, and policymakers, allowing them to make informed workforce planning, salary negotiation, and talent acquisition decisions. The study contributes to the body of knowledge by improving the precision of salary classifications, the identification of determinants, and the usefulness of predictive analytics in labor market trend analysis.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/797 Advance Assessment and Counting of Ripe Cherry Tomato’s Via Yolo Model 2025-01-10T10:26:30+00:00 Minhaj Naseem minhajnaseem183@gmail.com Bushra Ahmad minhaajnaseem183@gmail.com Abdul Razzaq minhaajnaseem183@gmail.com Salman Qadri minhaajnaseem183@gmail.com Sami Ullah minhaajnaseem183@gmail.com Shafqat Saeed minhaajnaseem183@gmail.com Ahsan Jameel minhaajnaseem183@gmail.com <p>Labor-intensive measurement and computer-based techniques have previously been the norm for gathering phenotypic information in laboratories.
This study focuses on the detection and counting of cherry fruit during plant growth in a greenhouse. It is distinctive in using a deep learning method for fruit imaging instead of classical computer vision techniques. The YOLO method has been used to detect the different stages of cherry fruit growth in the greenhouse rather than in the laboratory. This is an advanced method that closely detects the object and each pixel of the object, so the results of this study compare favorably with earlier works based on classical methods. The use of the YOLO method in this study is an innovative step in the field of agriculture that will be helpful in the future, not only for the collection of phenotypic information but also for the automation of processes such as harvesting. The results successfully demonstrate the detection and counting of cherry tomatoes, achieving a 92.6% precision rate and a 94.7% recall rate for detection, counting, and ripeness assessment. Overall, the study can help improve yield and product quality through reliable assessment of cherry tomatoes via algorithms.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/911 IIOT: An Infusion of Embedded Systems, TinyML, and Federated Learning in Industrial IoT 2025-03-18T18:29:23+00:00 Muhammad Abubakar mabubakarqazi@gmail.com Adbul Sattar drabdulsattar@lgu.edu.pk Hamid Manzoor hamidmanzoor.cs@mul.edu.pk Khola Farooq kholafarooq@lgu.edu.pk Muhammad Yousif myousif.cs@mul.edu.pk <p>With the Industry 5.0 revolution, manufacturing systems have shifted to smart manufacturing. Industry 5.0 is emerging alongside technologies such as the IoT, which provide real-time monitoring, analysis, and data collection.
For novelty in IIoT applications, this article investigates the combination of embedded systems, Tiny Machine Learning (TinyML), and Federated Learning (FL). Data privacy is ensured by Federated Learning (FL), and local data processing becomes efficient through Tiny Machine Learning (TinyML). This infusion promises to decrease latency, increase productivity, and improve data security. As previously unsolvable issues are being addressed with renewed enthusiasm, new paradigms for development and research are needed. The goal of this article is to provide a platform and close the knowledge gaps for future revolutionary research projects that will leverage the growing trend of embedded devices influenced by compressed artificial intelligence (AI) models [18]. Moreover, it discusses TinyML and federated learning (FL), which permit models to be trained locally on edge devices using their own data, reducing the requirement for centralized data accumulation, which may even be impossible in some Internet of Things (IoT) situations. It charts the development of embedded devices and wireless communication technologies, demonstrating the advent of Internet of Things applications across a spectrum of industries.
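The model-compression idea behind TinyML can be illustrated with simple symmetric post-training int8 quantization of a weight tensor; this is a toy sketch under an assumed single per-tensor scale factor, not the output of any specific TinyML toolchain:

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric post-training quantization: map float32 weights
    to int8 using a single scale factor."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)   # stand-in weight tensor
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.max(np.abs(w - w_hat)))         # bounded by scale / 2
```

The quantized tensor occupies a quarter of the float32 memory, which is the kind of reduction that lets models fit on microcontroller-class embedded devices at a small, bounded accuracy cost.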
In addition, the paper conducts a thorough review of cutting-edge technology to find recent works that apply TinyML models on readily available embedded devices, and discusses recent research trends.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/786 Advanced Next-Word Prediction: Leveraging Text Generation with LSTM Model 2024-12-11T21:05:13+00:00 Syed Hasham Hameed 21-CS-90@students.uettaxila.edu.pk Muhammad Munwar Iqbal munwar.iq@uettaxila.edu.pk Hasnat Ahmed hasnat.ahmad@uettaxila.edu.pk Wahab Ali wahabali1389@gmail.com Saqib Majeed saqib@uaar.edu.pk Malik Muhammad Ibrahim 21-CS-69@students.uettaxila.edu.pk <p>Natural Language Processing (NLP) increasingly relies on machine learning to make better predictions of sequential text. This work focuses on the application of Long Short-Term Memory (LSTM) networks, a variant of Recurrent Neural Networks (RNNs) specialized for modeling long-term dependencies. Traditional RNNs leave much to be desired when predicting sequences that contain repeated patterns or contextual dependencies. The research uses “The Adventures of Sherlock Holmes” as the training dataset and applies the TensorFlow and Keras frameworks for implementation. The major preprocessing steps included word tokenization, n-gram creation, and one-hot encoding to prepare the dataset for modeling. The LSTM model was trained over 100 epochs to optimize prediction capabilities. Through this work, we show that LSTM is effective for next-word prediction and can potentially improve the performance and practicality of language models for real-world applications.
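The preprocessing steps named above (tokenization, n-gram creation, one-hot encoding) can be sketched in plain Python; this is a simplified illustration of the idea, not the authors' exact Keras pipeline, and the sample sentence is just the famous opening line of the training text:

```python
def next_word_pairs(text: str) -> list[tuple[list[str], str]]:
    """Tokenize a sentence and emit (prefix, next_word) training pairs:
    each growing n-gram prefix predicts the word that follows it."""
    tokens = text.lower().split()
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

def one_hot(word: str, vocab: list[str]) -> list[int]:
    """One-hot encode a word against a fixed vocabulary."""
    return [1 if w == word else 0 for w in vocab]

sentence = "To Sherlock Holmes she is always the woman"
pairs = next_word_pairs(sentence)
vocab = sorted(set(sentence.lower().split()))
target_vec = one_hot(pairs[0][1], vocab)  # encodes the first target word
```

In the full pipeline, the prefixes are padded to a fixed length and the one-hot targets become the rows of the output layer that the LSTM is trained to predict.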
The model achieved a commendable accuracy of 87.6%, demonstrating its effectiveness.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/852 AI-Driven Predictive Threat Detection and Cyber Risk Mitigation: A Survey 2025-02-07T10:46:41+00:00 Muhammad Sarfraz irshadsomra88@gmail.com Irshad Ahmed Sumra irshadahmed@lgu.edu.pk Benish Khalid irshadahmed@lgu.edu.pk Ezzah Fatima irshadahmed@lgu.edu.pk <p>Predictive analytics is revolutionizing cybersecurity and various industries by leveraging artificial intelligence (AI) and machine learning (ML) to enhance threat detection, risk mitigation, and decision-making processes. By enabling a shift from reactive to proactive security strategies, AI-driven predictive models improve the accuracy of cyber threat detection, reduce response times, and strengthen overall resilience against evolving attack vectors. Advanced techniques such as deep learning, anomaly detection, and natural language processing (NLP) enhance the adaptability and precision of these systems. A comprehensive review of existing research highlights key advancements; challenges, including data integrity, algorithmic bias, and scalability; and ethical concerns related to privacy, fairness, and transparency. Beyond cybersecurity, predictive analytics optimizes efficiency across sectors such as healthcare, finance, manufacturing, and energy, supporting smarter resource allocation and operational improvements. The integration of emerging technologies, including quantum computing, federated learning, and blockchain, further enhances predictive capabilities while ensuring security and compliance. 
By addressing these aspects, this research provides valuable insights to advance AI-driven predictive analytics, guiding the development of intelligent, ethical, and scalable solutions for a rapidly evolving digital landscape.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/857 Incremental Learning with Self-Organizing Bayesian Adaptive Incremental Network (SOBAIN) 2025-02-17T08:26:27+00:00 Talha Ishaq talhaishaq@ucp.edu.pk Rabia Tehseen rabia.tehseen@ucp.edu.pk Uzma Omer uzma.omer@ue.edu.pk Anam Mustaqeem anam.mustaqeem@ucp.edu.pk Rubab Javaid rubabjavaid@ucp.edu.pk Maham Mehr maham.mehr@ucp.edu.pk Madiha Yousaf madiha.yousaf@ucp.edu.pk <p>Neural networks and artificial intelligence have revolutionized how machines learn by mimicking aspects of human cognition. One key area in this field is the ability of agents to understand and imitate actions involving various objects, allowing them to pick up new skills simply by observing others. By constantly updating their own knowledge, lifelong learners can build on what they know over time. However, the environments that artificial agents must deal with vary widely. Existing models designed for “lifelong learning” typically work with simplified experiments and datasets made up of static images, which limits their effectiveness for real-world applications. In this study, we propose a developmental model focused on how agents can learn about objects and actions through sensorimotor feedback, enabling humanoid robots to mimic actions more naturally. Our approach, "SOBAIN," works in three stages: neuron activation, neuron matching, and neuron learning. First, neurons activate based on specific traits that determine if they should "fire" or not. 
Then, we match the best neuron by comparing activation levels. At each learning point, new neurons connect to the network to match learned data. The final stage uses this network to refine and apply learned information, helping address memory loss issues in lifelong learning. By using these techniques, SOBAIN helps robots adapt their upper-body movements in line with sensor feedback, creating a chain of neurons that builds over time as the robot learns new actions.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/949 Textile Defect Detection in the textile industry using Deep Learning 2025-04-19T18:50:06+00:00 Tanzeela Kiran tanzeela.kiran666@gmail.com Mudasir Ali mudasiralics786@gmail.com Umair Ismail umairhammad911@gmail.com Muhammad Altaf Ahmad muhammadaltaf.ahmad@iub.edu.pk Urooj Akram urooj.akram@iub.edu.pk Wajahat Hussain faheem.mushtaq@iub.edu.pk Muhammad Faheem Mushtaq faheem.mushtaq@iub.edu.pk <p>In order to maintain high production standards, the quality control process in textile manufacturing mostly depends on the efficient detection of fabric flaws. Conventional defect monitoring techniques are labor-intensive, manual, and prone to human error, which results in inconsistent quality. In this research, a hybrid deep learning model is proposed that uses convolutional neural networks and gated recurrent unit networks for textile defect detection in the textile industry. The goal is to increase this crucial process's precision, effectiveness, and dependability. Since fabric defect identification is a crucial step in quality control, it is one of the manual operations that has gradually been automated using the aforementioned techniques. 
The performance evaluations were conducted on the proposed model and compared with other models, including Convolutional Neural Networks (CNNs), Artificial Neural Networks (ANN), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM). The CNN's ability to extract features and the GRU's skill at sequential learning are credited with this improved performance, allowing the model to successfully capture temporal and spatial relationships. The proposed hybrid model, which combines a CNN with a Gated Recurrent Unit (GRU), performed well compared to the other models and achieved an accuracy of 0.9841. The findings show that CNN is a strong option for the given problem, as it improves classification accuracy when combined with GRU.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/860 A Survey: Enhancing Wireless Security in the Digital Age 2025-02-13T12:05:38+00:00 Zain Iqbal irshadsomra88@gmail.com Irshad Ahmed Sumra irshadahmed@lgu.edu.pk Hadi Abdullah irshadahmed@lgu.edu.pk Ijaz Khan irshadahmed@lgu.edu.pk <p>In today’s globalized society, protecting wireless networks from unauthorized users is fundamental. Wireless networks and mobile devices are under constant attack, far more than their wired equivalents, which has made achieving complete protection an unfulfilled goal, despite decades of research. This study considers critical matters such as authentication, confidentiality, integrity, and availability, which are important in the security of wireless networks. We study existing protocols and recommend changes to improve their security. One innovation is a new strategic approach to security risk management that integrates a data authentication and integration model with machine learning methods. 
This also provides an examination of the current responses to wireless network security problems, highlighting their advantages and drawbacks in light of constant changes to cyber threats, especially phishing attacks. The objective is to accentuate the need to strengthen wireless network security to safeguard confidential information from unauthorized access. Additionally, the article provides an in-depth review of methods to address security vulnerabilities associated with wireless networks.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/889 Software Development Empowered and Secured by Integrating A DevSecOps Design 2025-03-11T15:32:17+00:00 Samavia Riaz samaviariaz77@gmail.com Ayyan Asif ayyanasif07@gmail.com Younus Khan Ynyuskhan464@gmail.com Muhammad Ibrar Mibrar@live.nmhu.edu Saira Afzal sairaafzal322@gmail.com Khalid Hamid khalid6140@gmail.com Sehar Gul sehar.gul@iba-suk.edu.pk Muhammad Waseem Iqbal waseem.iqbal@superior.edu.pk <p>Software development has grown fast, injecting speed and agility into delivery processes, but integrating security into these high-speed environments has remained a challenge. The solution to this problem comes through the adoption of a methodology known as DevSecOps, which encompasses security at each step of the software development lifecycle. This paper explores the adoption and value of DevSecOps, concentrating on automation, vulnerability detection, and continuous security testing. It presents a comprehensive review of available literature on the topic, with a special focus on leading tools, namely Static Application Security Testing (SAST), Software Composition Analysis (SCA), and Dynamic Application Security Testing (DAST). 
The paper goes on to discuss real-world examples of DevSecOps implementation, followed by a discussion of emerging trends such as machine learning, cloud-native security, and zero-trust models. The study shows that, though DevSecOps has not yet matured as a concept, its adoption is at a critical phase in building secure, efficient, and resilient software systems.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/915 Jellyfish Algorithm for Feature Selection: Improving Machine Learning-Based Heart Disease Prediction 2025-03-20T14:51:18+00:00 Usman Humayun usmanhumayun@bzu.edu.pk Muhammad Irfan muhammad.irfan1@cs.uol.edu.pk Saima Bibi saima.bibi@cs.uol.edu.pk Khadija Bibi khadijaqadir54@gmail.com Hasham Shokat hasham30194@gmail.com <p>This research paper describes a sophisticated medical diagnosis system based on machine learning (ML) to predict heart disease. The Jellyfish algorithm optimizes the Cleveland dataset, aiming to achieve the most accurate predictions with the most significant features chosen. The selection of features is crucial for performance, as excessive features can cause overfitting and too few features can cause accuracy loss. The Jellyfish technique is a type of swarm metaheuristic approach in which feature selection is optimized and the performance of the model is enhanced. After selecting features, four machine learning algorithms are trained on the optimized dataset. The algorithms are Artificial Neural Networks (ANN), Decision Tree (DT), AdaBoost, and Support Vector Machines (SVM). Results show that every single model benefits from feature selection. Feature selection most strongly impacts the Support Vector Machine model, which sees the highest increase, from 98.09% to 98.47%. 
Every performance metric shows a corresponding enhancement in Sensitivity, Specificity, and Area under the Curve (AUC), which shows that the Jellyfish algorithm can enhance the accuracy of heart disease prediction.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2025 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/884 An Ensemble Approach for Firewall Log Classification using Stacked Machine Learning Models 2025-03-08T12:02:20+00:00 Mudasir Ali mudasiralics786@gmail.com Muhammad Faheem Mushtaq faheem.mushtaq@iub.edu.pk Urooj Akram urooj.akram@iub.edu.pk Shabana Ramzan shabana@gscwu.edu.pk Saba Tahir saba.tahir@iub.edu.pk Muhammad Ahsan sarfraz.hashim@mnsuam.edu.pk <p>Firewall logs are still challenging to evaluate, despite being important data sources. Machine Learning has become a popular technology for creating strong security measures because of its ability to react quickly to complicated attacks. Firewall logs generate high-volume, complex, and often imbalanced data, where malicious activities are rare compared to normal traffic. The challenge is further compounded by the dynamic nature of cyber threats and the presence of noise or redundant information in the logs. In this research, a stacking classifier called Decision Tree Classifier + Bagging Classifier (DB) for firewall logs is proposed using ensemble machine learning models. A comparison is performed to evaluate the classifier's overall performance based on F1-score, accuracy, precision, and recall. Logs were collected from a firewall set up with Snort and TWIDS. The resulting log contains 65,532 records, each with a total of 12 attributes. Multi-class machine learning models are created that can analyze the firewall log dataset and classify the necessary actions, in response to learned classes, as "Reset-both," "Allow," "Deny," or "Drop". 
For assessment, a variety of machine learning methods have been used, such as Random Forest, K-Nearest Neighbor, Logistic Regression, and AdaBoost Classifier. The proposed model using stacking classifier DB achieved an accuracy rate of 99.89% in the experiments. The high accuracy rates produced, as compared to the other algorithms, show that the proposed approach was crucial in increasing the firewall classification rate.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/793 Multispectral Approach for Wheat Yield Estimation Using Deep Learning 2024-12-30T13:11:28+00:00 Muhammad Rashid 2022-uam-2888@mnsuam.edu.pk Salman Qadri salman.qadri@mnsuam.edu.pk Abdul Razzaq abdul.razzaq@mnsuam.edu.pk Sarfraz Hashim sarfraz.hashim@mnsuam.edu.pk Ali Hamza ali.hamza@mnsuam.edu.pk Muhammad Habib-ur-Rehman habib.rahman@mnsuam.edu.pk <p>Automation is becoming increasingly vital across various professions and domains, including agricultural practices. Remote-sensing-based wheat yield estimation has emerged as a superior alternative to traditional yield prediction methods. Historically, wheat yield measurement involved labor-intensive and time-consuming destructive sampling techniques. However, accurate and timely yield forecasts are pivotal for decision-making processes such as crop harvesting plans, milling, marketing, and forward selling strategies, thereby enhancing the efficiency and profitability of the global wheat sector. Presently, producers or productivity officers, often funded by mills, rely on destructive or visual sampling techniques to assess wheat production during the growing season. There's a growing demand for swift and efficient problem-solving methods. Consequently, the adoption of machinery for wheat cultivation has surged, aiming to lower production costs, reduce labor demands on farmers, and enhance harvest efficiency. 
Although not extensively compared, existing techniques for estimating agricultural output typically employ regression models relying on specific forecasting factors. This study aims to illustrate and compare the effectiveness of utilizing satellite earth observation data for monitoring agriculture, particularly in wheat production. Multiple regression models are compared, utilizing various predictor variables. The study incorporates wheat yield estimation techniques, such as regression models, time series analysis of vegetation indices, remote sensing, phenology measurements, and the normalized difference vegetation index (NDVI). Artificial intelligence algorithms, including Random Forest and ordinary least squares, are employed to develop a suggested approach that accurately correlates with ground-measured data. This research introduces a novel wheat yield estimation technique, which significantly improves forecasting accuracy and holds promise for enhancing decision-making processes in wheat farming practices.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics https://www.jcbi.org/index.php/Main/article/view/912 Ethical Considerations in Utilizing Machine Learning for Depression and Anxiety Detection in College Students 2025-03-18T18:41:30+00:00 Shoaib Saleem shoaibsaleem.cs@mul.edu.pk Muhammad Yousif myousif.cs@mul.edu.pk Muhammad Abubakar mabubakarqazi@gmail.com Faiza Rehman faizanaseer.cs@mul.edu.pk Saima Yousaf saimayousaf.csit@mul.edu.pk <p>Depression and anxiety are two of the most common mental disorders worldwide. This systematic review paper investigates the application of neural network-based machine learning techniques in assessing, identifying, and diagnosing depression and anxiety. 
These approaches range across many types of neural network architectures, from classical supervised learning methods to unsupervised approaches using recent deep models. A systematic literature review of studies conducted in the last five years, carried out using databases such as Science Direct and PubMed, shows that these approaches deliver promising results in tasks such as clinical data analysis, biomarker identification, and personalized treatment plan creation. One reviewed study reported an accuracy of 91.08%, supported by a comprehensive confusion matrix and analysis of the classification report for the employed neural network model (Level 6). There are still difficulties in correctly identifying cases of depression (class 1), despite relatively high recall and precision for non-depressive cases (class 0). This points to possible areas for improvement, particularly with the current class imbalances. This review paper can serve as a valuable source of information for anyone interested in neural network-based machine learning and how it may be used to address depression and anxiety, thereby providing insight into a future that will likely change mental health treatment models.</p> 2025-03-01T00:00:00+00:00 Copyright (c) 2024 Journal of Computing & Biomedical Informatics
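The class-imbalance behavior described in the last abstract (strong precision and recall for non-depressive cases, class 0, but weaker detection of depressive cases, class 1) can be illustrated with a minimal sketch. The synthetic data, 90/10 class split, and logistic-regression model below are illustrative assumptions, not the reviewed study's actual dataset or pipeline.

```python
# Illustrative sketch: evaluating a classifier on an imbalanced
# binary problem (class 0 = non-depressive, class 1 = depressive).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in data with a 90/10 class split, mimicking the
# imbalance the review describes.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                          random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# With imbalanced data, the per-class report tends to show strong
# scores on the majority class (0) and weaker recall on the
# minority class (1), which is the pattern the abstract notes.
print(confusion_matrix(y_te, pred))
print(classification_report(y_te, pred))
```

Inspecting the per-class rows of the report, rather than overall accuracy alone, is what surfaces the weak minority-class recall the review highlights.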