Welcome to GRAPH-HOC 2024

16th International Conference on Applications of Graph Theory in Wireless Ad hoc Networks and Sensor Networks (GRAPH-HOC 2024)

July 27 ~ 28, 2024, London, United Kingdom



Accepted Papers
Information Extraction From Product Labels: A Machine Vision Approach

Hansi Seitaj and Vinayak Elangovan, Computer Science program, Penn State Abington, Abington, PA, USA

ABSTRACT

This research tackles the challenge of manual data extraction from product labels by employing a blend of computer vision and Natural Language Processing (NLP). We introduce an enhanced model that combines Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) in a Convolutional Recurrent Neural Network (CRNN) for reliable text recognition. Our model is further refined by incorporating the Tesseract OCR engine, enhancing its applicability in Optical Character Recognition (OCR) tasks. The methodology is augmented by NLP techniques and extended through the Open Food Facts API (Application Programming Interface) for database population and text-only label prediction. The CRNN model is trained on encoded labels and evaluated for accuracy on a dedicated test set. Importantly, our approach enables visually impaired individuals to access essential information on product labels, such as directions and ingredients. Overall, the study highlights the efficacy of deep learning and OCR in automating label extraction and recognition.
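
To make the OCR stage concrete, here is a minimal sketch in Python of Tesseract-based label reading, assuming pytesseract and Pillow are installed; the CRNN recognizer and the NLP layer described above are the authors' own pipeline, so plain Tesseract and a naive keyword filter stand in for them here.

# Minimal OCR pass over a product-label image using the Tesseract engine.
# The image path and the "ingredient" keyword routing are illustrative.
from PIL import Image
import pytesseract

def extract_label_text(image_path):
    """Run OCR on a label image and pick out likely ingredient lines."""
    text = pytesseract.image_to_string(Image.open(image_path))
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    ingredients = [ln for ln in lines if "ingredient" in ln.lower()]
    return {"raw_text": text, "lines": lines, "ingredients": ingredients}

print(extract_label_text("label.jpg")["lines"][:5])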

KEYWORDS

Optical Character Recognition (OCR); Machine Vision; Machine Learning; Convolutional Recurrent Neural Network (CRNN); Natural Language Processing (NLP); Text Recognition; Text Classification; Product Labels; Deep Learning; Data Extraction.


Unveiling the Value of User Reviews on Steam: A Predictive Modeling of User Engagement Approach Using Machine Learning

Leonardo Espinosa-Leal1, María Olmedilla2, Jose-Carlos Romero-Moreno3, and Zhen Li1, 1Arcada University of Applied Sciences, Graduate Studies and Research, Finland, 2SKEMA Business School – Université Côte d'Azur, France, 3Applied Computational Social Sciences-Institute, University of Paris-Dauphine-PSL, France

ABSTRACT

In an era where user-generated content is both ubiquitous and influential, accurately evaluating videogame reviews’ relevance becomes critical. The vast digital domain of videogames brims with user feedback, presenting the challenge of distinguishing genuinely helpful reviews. Our study, analyzing over a million videogame reviews from the Steam platform, employs cutting-edge machine learning techniques to ascertain review helpfulness. We applied both regression and binary classification models, revealing the latter’s enhanced predictive prowess. Interestingly, our findings contradict the anticipated benefit of incorporating features from pre-trained NLP models for enhancing prediction accuracy. This investigation not only highlights methods for assessing review helpfulness effectively but also promotes the application of computational techniques for the insightful analysis of user-generated content. Furthermore, it provides valuable perspectives on the elements influencing user engagement and the intrinsic value of feedback within the context of videogame consumption, marking a significant contribution to understanding digital user interaction dynamics.
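
A scaled-down stand-in for the binary-classification setup, assuming scikit-learn; the example reviews, the vote threshold behind the labels, and the simple TF-IDF plus logistic-regression model are illustrative, not the study's feature set or models.

# Label a review as "helpful" (1) or not (0) from its text alone.
# Training data and labels are invented for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["Great gunplay, runs well on old hardware, 80 hours in.",
           "bad",
           "Refreshing roguelike loop, though the economy needs balancing.",
           "do not buy"]
helpful = [1, 0, 1, 0]  # e.g., 1 if helpfulness votes exceed some threshold

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, helpful)
print(clf.predict(["Solid soundtrack but the campaign is short."]))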

KEYWORDS

Videogames, helpfulness, machine learning, NLP, online reviews.


Predict the Consumer Price Index in Vietnam Using Long Short-Term Memory (LSTM) Network Based on Cloud Computing

Pham Trong Huynh, University of Natural Resources and Environment Ho Chi Minh City, Viet Nam

ABSTRACT

In Vietnam, the Consumer Price Index (CPI) serves as a pivotal gauge for evaluating inflation, alongside the Gross Domestic Product (GDP) Index. CPI data not only assesses economic performance but also forecasts future inflation trends. This research endeavors to predict CPI utilizing Long Short-Term Memory networks (LSTMs), an advancement over Recurrent Neural Networks (RNNs). The model inputs basic price variables in Vietnam to forecast CPI values. To enhance prediction accuracy, various optimization algorithms were employed, including Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMSProp), Adaptive Gradient (AdaGrad), Adaptive Moment Estimation (Adam), Adadelta, Nesterov Adam (Nadam), and Adamax. Results demonstrate Nadam's superiority, with an achieved RMSE of 4.088. Although the model's accuracy falls short of expectations, potential enhancements include adjusting epoch numbers, hidden layers, batch sizes, and input variables. This study not only presents the model but also proposes an approach to CPI data regarding essential food prices in forecasting inflation rates.
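
A minimal Keras sketch of the LSTM regression setup the abstract describes, using the Nadam optimizer it found best; the window size, layer widths, and the stand-in data are assumptions, not the study's configuration.

# LSTM regressor for a CPI-style series, compiled with Nadam (RMSE metric).
import numpy as np
import tensorflow as tf

window = 12                  # months of lagged prices per sample (assumed)
n_features = 5               # count of basic price variables (assumed)
X = np.random.rand(200, window, n_features).astype("float32")  # stand-in data
y = np.random.rand(200, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, n_features)),
    tf.keras.layers.Dense(1),
])
# Nadam was the best-performing optimizer in the study (RMSE 4.088).
model.compile(optimizer="nadam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=10, batch_size=16, verbose=0)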

KEYWORDS

LSTM, Machine Learning, CPI, Prediction, Nadam.


Navigating Efficiency: Proximal Policy Optimization for Efficient Product Transportation in Reinforcement Learning Environments

Asharful Islam and Chuan Li, Department of Computer Science, Sichuan University, Chengdu, China

ABSTRACT

Optimizing the delivery of products from a central depot to multiple retail locations presents a multifaceted challenge, especially when considering factors such as minimizing costs while ensuring product availability for customers. Traditional approaches to this problem often rely on heuristic methods or mathematical optimization techniques. However, these approaches may struggle to adapt to dynamic real-world scenarios with complex, evolving conditions. This study pioneers the application of Proximal Policy Optimization (PPO), a state-of-the-art reinforcement learning algorithm, to the domain of product transportation and inventory management. By creating a custom simulation environment, “ProductTransportEnv,” we delve into the complexities of supply chain logistics, demonstrating the significant potential of reinforcement learning to transform operational efficiencies. The “ProductTransportEnv” mimics real-world logistics scenarios, allowing for a detailed exploration of transportation routes, inventory levels, and demand fluctuations, providing a rigorous testing ground for the PPO algorithm.
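
A minimal sketch of how such an environment and training loop can be wired together, assuming Gymnasium and Stable-Baselines3; the state, action, and reward definitions here are illustrative guesses, not the paper's "ProductTransportEnv".

# Toy depot-to-stores environment trained with PPO; dynamics are invented.
import numpy as np
import gymnasium as gym
from stable_baselines3 import PPO

class ProductTransportEnvSketch(gym.Env):
    def __init__(self, n_stores=3):
        super().__init__()
        self.n_stores = n_stores
        # Action: fraction of a 5-unit truckload shipped to each store.
        self.action_space = gym.spaces.Box(0.0, 1.0, shape=(n_stores,), dtype=np.float32)
        self.observation_space = gym.spaces.Box(0.0, np.inf, shape=(n_stores * 2,), dtype=np.float32)
    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.inventory = np.full(self.n_stores, 10.0)
        self.demand = self.np_random.uniform(1, 5, self.n_stores)
        return self._obs(), {}
    def step(self, action):
        shipped = np.clip(action, 0, 1) * 5.0
        self.inventory += shipped - self.demand
        cost = shipped.sum() * 0.1                          # transport cost
        stockout = np.clip(-self.inventory, 0, None).sum()  # unmet-demand penalty
        self.inventory = np.clip(self.inventory, 0, None)
        self.demand = self.np_random.uniform(1, 5, self.n_stores)
        return self._obs(), float(-(cost + stockout)), False, False, {}
    def _obs(self):
        return np.concatenate([self.inventory, self.demand]).astype(np.float32)

model = PPO("MlpPolicy", ProductTransportEnvSketch(), verbose=0)
model.learn(total_timesteps=10_000)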

KEYWORDS

OpenAI Gym environment, “ProductTransportEnv,” PPO, RL, DQN, Inventory management.


Semi-reward Function Problems in Reinforcement Learning

Dong-geon Lee and Hyeoncheol Kim, Department of Computer Science and Engineering, Korea University, Seoul, Republic of Korea

ABSTRACT

Applying reinforcement learning agents to the real world is important. Designing the reward function is problematic, especially when it must intricately reflect the real world or requires burdensome human effort. Under such circumstances, we propose a semi-reward function. This system is intended to let each agent pursue an individual goal when a collective goal is not defined in advance. The semi-reward function, which does not require sophisticated reward design, is defined by ‘not allowed actions’ in the environment, without any information about the goal. A tutorial-based agent can sequentially determine actions based on its current state and individual goal, and it can be trained through the semi-reward function toward its own goal. Combining these two, we constructed a training method to reach the goal. We demonstrate that agents trained in arbitrary environments can move toward their own goals even when given different goals in different environments.
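
A toy illustration of the core idea, assuming nothing beyond plain Python: the environment specifies only "not allowed" actions and carries no goal information; the states, actions, and penalty value are invented for this sketch.

# Semi-reward function: penalize disallowed (state, action) pairs only.
NOT_ALLOWED = {("corridor", "turn_left"), ("cliff_edge", "step_forward")}

def semi_reward(state, action):
    """Return a penalty for disallowed actions; otherwise stay neutral.
    The agent's own goal supplies the rest of the learning signal."""
    return -1.0 if (state, action) in NOT_ALLOWED else 0.0

print(semi_reward("cliff_edge", "step_forward"))  # -1.0
print(semi_reward("corridor", "step_forward"))    # 0.0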

KEYWORDS

Reinforcement Learning, Reward Function, Reward Engineering, Transformer-based Agent, Goal-based Agent.


A Novel Framework for Monitoring Parkinson's Disease Progression Through Video Analysis and Machine Learning

Caroline Zhou1, Ivan Revilla2, 1The Harker School, 500 Saratoga Ave, San Jose, CA 95129, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Parkinson's disease (PD) is a progressive neurological disorder that necessitates continuous and accurate monitoring for effective management [4]. We propose an innovative system that leverages video analysis and machine learning to predict clinical scores for PD patients. Our system includes a mobile application for recording and uploading videos, a cloud-based server for processing the data, and a machine learning model for analyzing the videos [5]. Key technologies employed include Flutter for the mobile app, Firebase for data storage and authentication, and advanced machine learning models such as Bayesian Ridge and Random Forest regression [6]. Challenges such as variability in video quality and limited dataset diversity were addressed through robust preprocessing techniques and plans to expand the dataset to include more diverse participants. Our experiments demonstrated that Bayesian Ridge and Random Forest regression models achieved high prediction accuracy for clinical scores. The results highlight the system's potential for providing a reliable and user-friendly method for monitoring PD. This comprehensive approach promises significant improvements in patient care and disease management, making it a valuable tool for both patients and healthcare providers.
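
A hedged sketch of the score-prediction stage, assuming scikit-learn: video-derived features in, clinical scores out. The feature extraction itself is the authors' pipeline, so random stand-in data is used here.

# Compare the two regressors named above on mocked motion features.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X = np.random.rand(120, 20)   # stand-in motion features per video
y = np.random.rand(120) * 4   # stand-in clinical scores (assumed 0-4 scale)

for name, model in [("BayesianRidge", BayesianRidge()),
                    ("RandomForest", RandomForestRegressor(n_estimators=200))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print(name, "MAE:", -scores.mean())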

KEYWORDS

Parkinson's Disease, Video Analysis, Machine Learning, Mobile Application.


A Smart Medications Recording and Medical Progress Tracking Mobile Platform using Artificial Intelligence and Machine Learning

Yujia Mao1, Khoa Tran2, 1Forest Ridge School of the Sacred Heart, 4800 139th Ave SE, Bellevue, WA 98006, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

This application aims to provide users with comprehensive information about different medications through a user-friendly and straightforward interface. The journey begins with a splash screen, followed by a login screen. Upon logging in, users can access detailed drug information, including names, images, and uses. New users can sign up via Firebase Authentication. The application features an AI that analyzes user behavior and common medication usage patterns using TensorFlow and Python. The AI detects drug labels, saving the data on a server through Cloud Firebase for easy access. Testing across continents highlighted the app's effectiveness in providing accurate drug information and timely notifications, with detection rates for drug information from pictures of 77% in Asia, 82% in Europe, and 79% in the Americas, and a 99% rate for timely notifications. For this application, we took data reliability, usability, and AI accuracy into consideration: it required ample reliable data, a user-friendly interface, and an accurate and precise AI.

KEYWORDS

AI, Medications tracking, Flutter, Medical information.


Review of IDS, ML and Deep Neural Network Technique in DDoS Attacks

Om Vasu Prakash Salmakayala, Saeed Shiry Ghidary, and Christopher Howard, School of Digital, Technology, Innovation and Business at Staffordshire University, Stoke on Trent, Staffordshire-ST4 2DE, United Kingdom

ABSTRACT

Intrusion Detection Systems (IDS) and firewalls often struggle to identify malicious packets, creating opportunities for threat actors to exploit vulnerabilities. Advanced tactics are used by threat actors to bypass these detection mechanisms. They employ evasion techniques, such as adjusting anomalies or thresholds in anomaly-based systems and injecting ambiguity into packet data, which confuses IDS and firewalls. Despite previous applications of machine learning (ML) in cybersecurity, challenges persist. This research aims to review traditional IDS failures and examine the evolution of ML and deep neural networks (DNN) from their basic functionalities to advanced mechanisms. This study also summarizes the types of ML and DNN, along with their techniques in various applications, both individually and in combination, with a focus on detecting ICMPv4/ICMPv6 DDoS attacks and the necessity of integrating both to mitigate such attacks.

KEYWORDS

AI, ML, DDoS attack, DNN, ICMPv6.


Detection of ICMPv6 DDoS Attacks Using a Hybrid Integration Model of RNN and GRU

Om Vasu Prakash Salmakayala, Saeed Shiry Ghidary, and Christopher Howard, School of Digital, Technology, Innovation and Business at Staffordshire University, Stoke on Trent, Staffordshire-ST4 2DE, United Kingdom

ABSTRACT

The internet, crucial for information exchange, operates on the IPv6 and IPv4 protocols, which are vulnerable to DDoS attacks. Despite secure-edge advancements, these attacks still cause significant losses. This paper presents a Deep Neural Network (DNN) architecture to address these vulnerabilities. Model 1 integrates Recurrent Neural Networks (RNN) with Gated Recurrent Units (GRU), inspired by Ahmed Issa, while Model 2 employs Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM). These models were tested on the Mendeley, NSL-KDD, and Sains Malaysia datasets, achieving accuracies of 80%, 80%, 97.01%, 95.06%, 72.89%, and 64.94%, respectively. The objective is to verify the practical feasibility of these combinations for detecting DDoS attacks. The same architecture was implemented in Model 1 for further evaluation using NSL-KDD (as used by Issa), Mendeley IPv4, and Sains Malaysia datasets. New ICMPv6 datasets were deployed with different architecture layers on the proposed model, resulting in promising accuracies of 99.36% and 94.48%.
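
A Keras sketch of Model 1's RNN-plus-GRU stacking as the abstract describes it; the layer widths, window length, and flow-feature count are assumptions, not the paper's configuration.

# Hybrid RNN + GRU binary classifier for flow windows (attack vs. benign).
import tensorflow as tf

n_steps, n_features = 10, 41   # packets per window / flow features (assumed)
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(64, return_sequences=True,
                              input_shape=(n_steps, n_features)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()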

KEYWORDS

DDoS attacks, IPv4, ICMPv6, IPv6, Deep Neural Networks (CNN, LSTM, RNN, GRU).


Enhancing Cost-Effective License Plate Detection and Recognition on Low-Compute Edge Devices Through Unified Modeling and TensorRT Quantization

Sonu Kumar, Hassan Berry, Bahram Baloch, Ibrahim Chippa, and Abdul Muqsit Abbasi

ABSTRACT

In the realm of smart city and intelligent transportation systems, the efficient detection and recognition of license plates on low-compute edge devices present a significant challenge. Traditional high-compute infrastructures, while powerful, are neither cost-effective nor scalable for widespread implementation. This research paper addresses this challenge by introducing a novel, unified model designed to optimize license plate detection and recognition on these resource-constrained devices. Our comprehensive approach includes data augmentation using core computer vision techniques and a custom YOLOv3 configuration tailored for this specific task. Key innovations in our methodology are the use of flipped and unflipped numbers in a dual-phase training regimen and the quantization of models using TensorRT. This enables efficient deployment on edge devices, overcoming the traditional tradeoffs between performance and computational demands. The results demonstrate that our model not only performs with high accuracy in detecting license plates and recognizing characters but also stands out in terms of cost-effectiveness and scalability. This positions our research at the forefront of ALPR technology, offering a practical, efficient solution for smart city and surveillance technologies.
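
The dual-phase idea of training on both flipped and unflipped characters can be approximated with OpenCV's flip, as in this small sketch; the file paths and the particular flips chosen are placeholders, not the paper's augmentation recipe.

# Generate flipped variants of a plate crop for augmentation.
import cv2

img = cv2.imread("plate_crop.jpg")
augmented = [
    ("orig", img),
    ("hflip", cv2.flip(img, 1)),   # horizontal flip
    ("vflip", cv2.flip(img, 0)),   # vertical flip
]
for tag, im in augmented:
    cv2.imwrite(f"plate_{tag}.jpg", im)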


Revolutionizing Utility Meter Reading in Developing Economies: A Computer Vision-Powered Solution - A Case Study of Pakistan

Eman Ahmed, Ibrahim Chippa, Bahram Baloch, Hassan Berry, and Abdul Muqsit Abbasi

ABSTRACT

This research paper explores the modernization of meter reading processes in third-world countries, with a specific focus on Pakistan. Traditional manual meter reading practices in these regions are labor-intensive, error-prone, and time-consuming, leading to suboptimal utility management and financial losses. To address these challenges, our study introduces a digitalized meter reading system enhanced by computer vision and machine learning technologies. This system automates data collection, enables real-time monitoring, and employs data analytics to enhance accuracy and efficiency. By reducing human error and ensuring timely data transmission, this digitized assistant empowers utility providers to make informed decisions and optimize resource allocation. Using Pakistan as a case study, we evaluate the impact of the digitized meter reading assistant on operational efficiency, cost-effectiveness, and overall utility management. Through key performance indicators and case studies, we demonstrate how computer vision and machine learning can enhance service delivery, reduce financial losses, and promote sustainability in third-world economies. This research contributes to the discourse on technological interventions in developing countries by highlighting the potential of digitizing essential services like meter reading. The findings offer valuable insights for policymakers, utility providers, and researchers seeking innovative solutions to address operational challenges in similar socio-economic contexts.


A Smart Task Management and Schedule Suggestion Mobile Platform for Efficiency Improvement Using Artificial Intelligence and Machine Learning

Keith Cao1, Joshua Wu2, 1Issaquah High School, 700 2nd Ave SE, Issaquah, WA 98027, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Task management is a dilemma many people are plagued with in their daily lives. Ironically, this itself is a massive task for many, especially if they suffer from depression or other mental illnesses. In developing this app, we used ChatGPT 3.5-Turbo to receive a list of tasks from a user and return a completed schedule for them to follow. It contains a scheduling system that receives a token from the user and returns a token of its own in the form of JSON data that the program then parses and displays, a cloud storage that stores every user’s data individually and can be drawn on easily, and a tasks page where users may create new tasks and complete old ones with three different display settings relating to the range of time the tasks are in: Today, Week, and All. This application will enable users to save time by crafting a schedule for them, help improve the time management skills of users, and help them finish goals before a set deadline.

KEYWORDS

Artificial Intelligence, ChatGPT, OpenAI, Flutter.


An Intelligent Computer Application to Correct Posture in Exercises Using Human Motion Tracking

Zhiyao Zha1, Jonathan Thamrun2, 1Los Osos High School, 6001 Milliken Ave, Rancho Cucamonga, CA 91737, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Our solution addresses the widespread issue of inefficient posture during exercise, a common obstacle many individuals encounter, including myself and those around me. Proper posture during workouts is crucial as it not only enhances the effectiveness of the exercise but also minimizes the risk of injuries associated with incorrect form [1]. Ensuring optimal posture leads to a safer and more productive exercise experience, maximizing benefits and reducing the likelihood of harm over time. To tackle this challenge, the program employs a method of tracking the user's form across various exercises, providing tailored feedback for improvement [2]. This approach proves more effective than traditional methods such as referencing static pictures or videos, as it offers direct, personalized guidance that can adapt to the user’s specific needs and progress, thereby improving exercise quality and safety.
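
The geometric core of such posture feedback can be reduced to the angle at a joint computed from three tracked keypoints, as in this sketch; the keypoint coordinates, the joint chosen, and the feedback threshold are assumptions for illustration.

# Angle ABC in degrees, where b is the joint (e.g., the knee during a squat).
import numpy as np

def joint_angle(a, b, c):
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hip, knee, ankle = (0.5, 0.4), (0.5, 0.7), (0.55, 1.0)
angle = joint_angle(hip, knee, ankle)
print("knee angle:", round(angle, 1),
      "- squat deeper" if angle > 100 else "- good depth")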

KEYWORDS

Intelligent, Computer, Posture, Exercises.


Two Rigorous Statistical Tools for Objective Analysis of Quantum Random Number Generation

Parthasarathy Srinivasan and Tapas Pramanik, Oracle Corporation USA

ABSTRACT

Quantum Random Number Generation (QRNG) provides a superior alternative to classical Random Number Generation (CRNG), and the two experiments outlined in this work validate this premise. The first experiment uses random numbers generated with QRNG and CRNG as input data samples to an Evolutionary Algorithm (namely Differential Evolution), which mutates and thresholds these samples using the known Rastrigin and Rosenbrock functions and evolves the solution pool towards convergence. Rigorous statistical analysis employing p-values is applied to the convergence data to show that QRNG is indeed qualitatively superior to CRNG (QRNG surpasses CRNG by a factor of 2). These results are complemented by a second experiment wherein the QRNG and CRNG samples are generated and statistically compared using yet another tool, namely bottleneck distance, which leads to a conclusion consistent with the one obtained in the first experiment (QRNG again surpasses CRNG by the same factor of 2 in the range of statistical distances obtained from the two RNG methods).
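
A reproducible stand-in for the first experiment, assuming SciPy: run Differential Evolution on the Rastrigin function under two random-number streams and compare convergence with a significance test. A second seeded pseudo-random stream substitutes for the quantum source here, so this illustrates only the testing procedure, not the QRNG-versus-CRNG result itself.

# Differential Evolution on Rastrigin under two RNG streams + p-value test.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import mannwhitneyu

def rastrigin(x):
    return 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

bounds = [(-5.12, 5.12)] * 5

def best_values(seed0):
    # Best objective value reached per run, over ten seeded runs.
    return [differential_evolution(rastrigin, bounds, seed=s, maxiter=50).fun
            for s in range(seed0, seed0 + 10)]

a, b = best_values(0), best_values(100)   # two independent RNG streams
print("p-value:", mannwhitneyu(a, b).pvalue)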

KEYWORDS

Bottleneck distance, p-value, Evolutionary Algorithm, Takens Embedding.


A Computationally Empirical Adaptation of the Prony Method which Assures Consistent Higher Precision and Stability in the Reconstructed Output Signal Components

Parthasarathy Srinivasan and Tapas Pramanik, Oracle Corporation USA

ABSTRACT

The Prony method for approximating signals comprising sinusoidal/exponential components is known through the pioneering work of Prony in his seminal dissertation in the year 1795. However, the Prony method saw the light of real-world application only upon the advent of the computational era, which made feasible the extensive numerical intricacies and labor which the method demands inherently. While scientific works (such as the Total Least Squares method) exist which focus on alleviating some of the problems arising from computational imprecision, they do not provide a consistently assured level of highly precise results. This study improves upon the Prony method by observing that a better (more precise) computational approximation can be obtained under the premise that an adjustment for computational error can be made in the autoregressive model set up in the initial step of the Prony computation itself. This adjustment is in proportion to the deviation of the coefficients in the same autoregressive model. The results obtained by this improvement live up to the expectations of obtaining consistency and higher precision in the output (recovered signal) approximations, as shown in this current work.
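
For reference, here is a bare-bones sketch of the classical Prony step the study builds on: fit the autoregressive (AR) coefficients by least squares, then recover the component poles from the polynomial roots. The paper's error adjustment modifies this AR step; only the unadjusted baseline is shown, with NumPy assumed.

# Classical Prony: AR fit by least squares, then roots of the char. polynomial.
import numpy as np

def prony_roots(signal, p):
    """Estimate p exponential components from uniformly sampled data."""
    N = len(signal)
    # AR model: signal[n] = -a1*signal[n-1] - ... - ap*signal[n-p]
    A = np.column_stack([signal[p - k - 1:N - k - 1] for k in range(p)])
    b = signal[p:N]
    a, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return np.roots(np.concatenate(([1.0], a)))   # poles z_i = exp(s_i * dt)

t = np.linspace(0, 1, 100)
sig = np.exp(-0.5 * t) * np.cos(2 * np.pi * 5 * t)   # damped sinusoid
print(prony_roots(sig, 4))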

KEYWORDS

Prony Method, Fourier Series, Autoregression, Imprecision.


AraSpider: Democratizing Arabic-to-SQL

Ahmed Heak, Youssef Mohamed, and Ahmed B. Zaky, Department of Computer Science, Egypt-Japan University of Science and Technology

ABSTRACT

This study presents AraSpider, the first Arabic version of the Spider dataset, aimed at improving natural language processing (NLP) in the Arabic-speaking community. Four multilingual translation models were tested for their effectiveness in translating English to Arabic. Additionally, two models were assessed for their ability to generate SQL queries from Arabic text. The results showed that using back translation significantly improved the performance of both ChatGPT 3.5 and SQLCoder models, which are considered top performers on the Spider dataset. Notably, ChatGPT 3.5 demonstrated high-quality translation, while SQLCoder excelled in text-to-SQL tasks. The study underscores the importance of incorporating contextual schema and employing back translation strategies to enhance model performance in Arabic NLP tasks. Moreover, the provision of detailed methodologies for reproducibility and translation of the dataset into other languages highlights the research's commitment to promoting transparency and collaborative knowledge sharing in the field. Overall, these contributions advance NLP research, empower Arabic-speaking researchers, and enrich the global discourse on language comprehension and database interrogation.

KEYWORDS

Semantic Parsing, SQL Generation, Text-to-SQL, Spider Dataset, Natural Language Processing.


Performance Evaluation of Large Language Model for Copy Number Variation Extraction From Medical Journal

Jongmun Choi, Department of Molecular Genetics and Artificial Intelligence Research Center, Seegene Medical Foundation, Seoul, South Korea

ABSTRACT

This study assesses the efficacy of using Large Language Models (LLMs), specifically GPT-4, for extracting Copy Number Variations (CNVs) from medical journal articles, a task critical for advancing genetic research and clinical decision-making. Copy Number Variations (CNVs) significantly contribute to genetic diversity and disease, yet their complexity and the variable nature of their genetic content pose challenges for interpretation in clinical genetics. Traditional methods for CNV data extraction from clinical journals have faced limitations in accuracy, partly due to the inherent complexity of genetic data. This paper evaluates an alternative approach using GPT-4, comparing its performance against CNV-ETLAI, a specialized NLP-based model designed for CNV extraction. Our methodology involved configuring GPT-4 to process and interpret medical journal PDFs, developing custom prompts for CNV information extraction, and benchmarking its performance using a dataset of 146 true positive CNVs. The results revealed that while GPT-4 shows promise, with commendable performance despite the lack of fine-tuning for medical document analysis, it significantly lags behind CNV-ETLAI, particularly in extracting information from tables—a crucial aspect of data interpretation in genomics. Despite GPT-4's lower accuracy, its potential for improvement and adaptability highlights the evolving capabilities of LLMs as valuable tools for medical data extraction. This study underscores the superiority of CNV-ETLAI in current clinical genetic settings while pointing towards the promising future of LLMs in enhancing the efficiency and breadth of medical data extraction across various applications.
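
An illustrative sketch of the kind of prompt-driven extraction the study evaluates, assuming the OpenAI Python SDK (openai>=1.0) and an API key in the environment; the prompt wording, the JSON schema, and the pre-extracted text file are invented for this sketch and are not the paper's custom prompts or PDF handling.

# Prompt GPT-4 to extract CNVs from journal text (illustrative only).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
article_text = open("case_report.txt").read()  # pre-extracted journal text

prompt = ("Extract every copy number variation mentioned below. "
          "Reply as JSON: [{\"cytoband\": ..., \"type\": \"gain|loss\", "
          "\"coordinates\": ...}]\n\n" + article_text)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)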

KEYWORDS

Large Language Model, GPT-4, Copy Number Variation, Natural Language Processing, Text-mining, Genetic Interpretation.


A Smart Mobile Platform to Assist with Reading Comprehension using Machine Learning and Lexical Simplification

Jake Jin1, Yu Sun2 and Ang Li2, 1USA, 2California State Polytechnic University, USA

ABSTRACT

Our research tackles the pressing issue of making news articles accessible and understandable to diverse audiences, particularly those with low literacy levels or cognitive disabilities such as dyslexia or autism [1]. We introduce an innovative AI-driven news application that employs advanced text simplification techniques alongside dynamic user feedback loops to significantly enhance readability and comprehension. At the heart of our solution is the integration of cutting-edge natural language processing (NLP) and machine learning technologies, including BERT text simplification models for parsing and restructuring complex sentences, coupled with sentiment analysis to gauge the emotional tone of content [2][3]. Addressing challenges such as maintaining accuracy in text simplification and fine-tuning the user feedback mechanism were pivotal in our development process [4]. Through rigorous experimentation, including controlled tests and user trials, we observed marked improvements in the accessibility of news content, with enhanced readability scores and positive user feedback. Our application stands out by offering a scalable, user-centered approach to news consumption, adapting to individual preferences and reading abilities. This ensures a more inclusive, informed public discourse, making our app an indispensable resource for bridging the information divide and empowering all users to stay informed, regardless of their literacy level or cognitive capabilities.

KEYWORDS

App, Artificial intelligence, Reading, News, Simplification


Design and Implementation of a Stress-Relief Mobile Application: Utilizing OpenAI, Anonymous Chat, Gratefulness Lists, and Color Therapy to Reduce Suicide Rates

Xusheng Ou1 and Rodrigo Onate2, 1USA, 2California State Polytechnic University, USA

ABSTRACT

This app is made to help reduce the high rate of suicide [1]. The whole app is designed for easy use and fast stress relief. The overall idea is to share the problems causing the stress anonymously; therefore the user wouldn't need to worry about any of these things being connected to their personal life. Some systems I implemented to achieve this are OpenAI, anonymous chat, a gratefulness list, and a colored theme [2]. By researching color combinations that could help the brain relieve stress, I was able to use the color set to design the overall app. I created ten prompts that could be sources of stress and tested how the chatbot replies to each. The most important result I found was that the chatbot's replies convey a sense of understanding of what the user is going through. My idea gives users a better experience because it is easy to use and provides more privacy, since some sources of stress might be sensitive.

KEYWORDS

Suicide Prevention, Stress Relief, Anonymous Support, Color Therapy


Factors Influencing Attitude and Purchase Intention of Biodegradable Garbage Bag in Vietnam

Dang Duong Huyen Thi, National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan

ABSTRACT

Vietnam is experiencing rapid economic growth, but it also has serious environmental problems, most notably increased trash generation and plastic pollution. Biodegradable Garbage Bags have become a viable way to address this problem. But obstacles stand in the way of their widespread acceptance, especially in rural areas. This study explores the variables influencing Vietnamese consumers' opinions and intentions to buy biodegradable garbage bags. The research intends to direct strategies for encouraging sustainable consumption patterns and fostering environmental consciousness in the nation by examining consumer behavior and environmental awareness.

KEYWORDS

Biodegradable Garbage Bag, Sustainable Development, Environment Concern, Eco Friendly Products, Consumer Behavior, Consumer Perception, Market Research.


Autonomous Roadside Assistance Drones: Revolutionizing Vehicle Diagnostics and Maintenance Through Aerial Technology

Amneek Singh, Senior Software Development Engineer, Datasutram, Mumbai, India

ABSTRACT

This research explores the integration of autonomous drone technology with advanced diagnostic tools for real-time vehicle maintenance and emergency roadside assistance. Incorporating machine learning and computer vision, the system aims to dramatically improve response times, diagnostic accuracy, and service availability, especially in remote areas. Through comprehensive simulations and development of a robust software framework, this study demonstrates a scalable model for future enhancements in automotive service technologies.

KEYWORDS

Autonomous Drones, Vehicle Roadside Assistance, Machine Learning, Artificial Intelligence, IoT (Internet of Things)


Lightweight Dataset for Decoy Development to Improve IoT Security

David Weissman1 and Anura P. Jayasumana2, 1Department of Systems Engineering, Colorado State University, Fort Collins, CO, USA, 2Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, CO, USA

ABSTRACT

In this paper, the authors introduce a lightweight dataset to interpret IoT (Internet of Things) activity in preparation for creating decoys that replicate known data traffic patterns. The dataset comprises different scenarios in a real network setting. This paper also surveys information related to other IoT datasets, along with the characteristics that make our data valuable. Many of the datasets available are synthesized (simulated) or address industrial applications, while the IoT dataset we present is based on likely smart home scenarios. Further, only a limited number of IoT datasets contain both normal operation and attack scenarios. A discussion of the network configuration and the steps taken to prepare this dataset is presented as we prepare to create replicative patterns for decoy purposes. The dataset, which we refer to as IoT Flex Data, consists of four categories, namely, IoT benign idle, IoT benign active, IoT setup, and malicious (attack) traffic associating the IoT devices with the scenarios under consideration.

KEYWORDS

IoT Security, Device Decoys, Network Traffic Replication, IoT Datasets, Deception


Comprehensive Framework: Biometric Data Security in IoT Wearables

Nassma Zita1 and Pinar Sarisaray Boluk2, 1Department of Computer Engineering, Bahçeşehir University, 34349 Istanbul, Turkey, 2Department of Software Engineering, Istanbul University, 34349 Istanbul, Turkey

ABSTRACT

Biometric data, such as physiological and behavioral characteristics, form the basis for greatly increasing usability and convenience in IoT wearables, including fitness trackers and smartwatches. However, the collection and processing of such data pose a number of serious security and privacy concerns. This work proposes a comprehensive framework, namely SecureBioIoT, designed to mitigate the familiar challenges of biometric data security in IoT wearable devices. The proposed SecureBioIoT framework is based on advanced methods and solutions that provide high levels of security across the full biometric data life cycle. Its security features include multi-modal biometric fusion authentication, continuous improvement measures, real-time monitoring and anomaly detection, compliance and governance measures, robust incident response and remediation processes, and user education and awareness initiatives. Based on a comprehensive analysis of existing research and a detailed examination of each component, this paper aims to advance the current discussion on securing biometric data in IoT wearables and to open the path to a more secure and trusted environment.

KEYWORDS

Internet of Things, Biometric data, Secure Wearables, Security, Privacy.


An Insight Into the Immune System and Its Mathematical Models

Manfred, Ventspils University of Applied Sciences, Latvia

ABSTRACT

The article has two goals: to attract the interest of mathematicians to immunology, and to look for ideas for cyber defense systems by considering the human immune system as a highly sophisticated defense system against any danger. An overview of the role of lymphocytes in the immune system (IS), the main IS models, and a few tips for IS mathematical modeling are given. Melanoma is considered a key topic for future work.
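
As one concrete example of the kind of IS model the article surveys, a classic two-population effector-tumour system (in the spirit of Kuznetsov et al.; shown here as an illustrative sketch, not the article's own formulation) reads, with E effector cells and T tumour (e.g., melanoma) cells:

\frac{dE}{dt} = s + \frac{p\,E\,T}{g + T} - m\,E\,T - d\,E, \qquad
\frac{dT}{dt} = a\,T\,(1 - b\,T) - n\,E\,T

Here s is the baseline influx of effector cells, the saturating term models tumour-stimulated recruitment, the bilinear terms capture mutual inactivation and tumour kill, and a, b set logistic tumour growth.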

KEYWORDS

immune system, danger model, melanoma, cyber defense.


Demystifying Technology Adoption Through Implementation of a Multilevel Technology Acceptance Management Model

Gilbert Busolo, Lawrence Nderu and Kennedy Ogada, School of Computing and Information Technology, Department of Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya

ABSTRACT

Successful data-driven decision making in any organization is anchored on the tenets of knowledge as a strategic resource, and successful adoption of a technological intervention to harness this pivotal resource is key. Institutions leverage technology for prudent data management to drive knowledge management (KM) initiatives towards quality service delivery. These initiatives provide the overall strategy for managing data resources by making knowledge organization tools and techniques available while enabling regular updates. Some of the benefits derived from positive deployment of a technological intervention are competency enhancement through gained knowledge, raised quality of service, and promotion of healthy development of an e-commerce operating environment. Timely, focused and successful adoption of technological interventions through which knowledge management initiatives are deployed remains a key challenge to many organizations. This paper proposes a multilevel technology acceptance management model. The proposed model takes into account human, technological and organizational variables, which exist in a deployment environment. To validate the model, a descriptive survey was conducted sampling ICT personnel in the Kenyan public sector. A regression analysis framework was adopted to determine the statistical relationship between the dependent (technology acceptance) and independent (human, technological and environmental) variables. Results indicate that technology acceptance in the Kenyan public sector is significantly predicted by human variables (p=.00<.05; LL=0.325; UL=0.416), technological variables (p=.00<.05; LL=0.259; UL=0.362) and environmental variables (p=.00<.05; LL=0.282; UL=0.402). Based on the findings, it is deduced that the proposed multilevel technology acceptance model is validated. The findings also provide sufficient evidence to reject the null hypothesis that the multilevel knowledge management acceptance model is insignificant to successful technological intervention implementation. The study therefore concludes that the multilevel knowledge management acceptance model is of crucial importance to successful technological intervention implementation. The study recommends a multilevel technology deployment process at three key levels. The first level ought to address any gaps in the identified human-related factors, while the second level in the deployment process involves providing an enabling environment for adoption of the intervention. The third level entails the actual deployment of the technological intervention with a focus on key features of the technologies involved. This model will be vital in driving early technology acceptance prediction and timely deployment of mitigation measures to deploy technological interventions successfully.
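
A sketch of the regression framework described above, assuming statsmodels; the survey data are mocked with coefficients chosen to echo the reported ranges, so only the analysis pattern (estimates, confidence bounds, p-values) is illustrated.

# OLS: technology acceptance on human, technological, environmental variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # human, technological, environmental
y = X @ [0.37, 0.31, 0.34] + rng.normal(scale=0.5, size=200)

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.params)       # coefficient estimates
print(fit.conf_int())   # lower/upper bounds, as reported (LL/UL) in the text
print(fit.pvalues)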

KEYWORDS

Technology Acceptance, Technology Adoption, Knowledge, Management Model, Multilevel, Technology Model.


ReTAC: Real Time Approach to Attack COVID-19 Virus Worldwide Based on Barnes-Hut Algorithm

Radhouane Boughammoura, Institut Supérieur d’Informatique de Mahdia, Université de Monastir, Tunisia

ABSTRACT

The aim of ReTAC (Real Time Attack of COVID-19 virus) is to protect people from possible COVID-19 contamination. In addition, ReTAC is able to detect dangerous situations which necessitate closing a country's frontiers. Another relevant aspect of ReTAC is that the solution can run worldwide, i.e., the algorithm's scope is the whole world. Finally, our approach is based on Barnes-Hut, an algorithm with O(n log n) complexity. We believe that with ReTAC we can rapidly save lives.

KEYWORDS

Artificial Intelligence, Social Distance, COVID-19 Virus, Attack.


A Real-time Bike Training Simulation System to Enhance User Engagement and Performance Using Friction Generators, Firebase, and Unity

Ziwei Yang1, Tyler Boulom2, 1Beijing National Day School, No. 66 Yu Quan Road, Haidian District, Beijing, China 100039, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Integrating physical exercise with digital entertainment presents unique challenges, particularly in accurately translating real-world cycling into virtual simulations [6]. This project aims to bridge this gap by using a bike friction generator, current sensor, and Adafruit ESP Feather microcontroller to capture real-time cycling data, which is then transmitted to Firebase and synchronized with a Unity-based simulation [7]. Key technologies include real-time data acquisition, transmission, and virtual simulation. Challenges such as data latency and hardware calibration were addressed through optimized protocols and robust calibration systems. Experimentation involved diverse scenarios, demonstrating high accuracy, minimal latency, and enhanced user engagement. The results indicate that our system provides an immersive and accessible fitness experience, making it a viable alternative to expensive specialized equipment. This innovative approach not only promotes physical activity but also offers an engaging and realistic training environment, highlighting its potential for broad application in fitness and entertainment sectors.

KEYWORDS

User engagement, Unity, Real-Time Simulation, Interactive training systems.


Interpretable Deep Learning Architecture for Time Series Forecasting With Sentiment Analysis

Umar Mahmoodh1 and Ragu Sivaraman2, 1Computer Science and Engineering, University of Westminster, London, United Kingdom, 2Department of Computing, Informatics Institute of Technology, Colombo, Sri Lanka

ABSTRACT

Time series (TS) forecasting is a crucial area in various domains and has been widely researched in recent years. Even though much notable research and innovation in this area draws on Machine Learning (ML), Deep Learning (DL), and Natural Language Processing (NLP), these approaches frequently lack interpretability, making it challenging to comprehend the elements influencing predictions. Furthermore, adding sentiment analysis to forecasting models can yield insightful results; however, combining these features with interpretability requirements is still a challenge in the field of TS forecasting. To address this gap, this work proposes a novel deep learning architecture for time series forecasting that makes use of sentiment analysis of stock news and integrates Explainable Artificial Intelligence (XAI), allowing for result interpretability. Modern NLP approaches such as VADER and FinBERT are used in the suggested model to merge sentiment ratings derived from financial news with Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU), Convolutional Neural Networks (CNN), and Transformer models. The model's predictions were explained in plain and understandable terms using LIME (Local Interpretable Model-agnostic Explanations) to improve interpretability, and the explanations were compared with results from other XAI libraries such as SHAP (SHapley Additive exPlanations). A combined dataset created by the author from historical stock prices and sentiment scores was used to assess the model with a variety of measures, such as Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). The outcomes showed that prediction accuracy was greatly increased by combining sophisticated deep learning models with sentiment analysis. The Transformer and N-BEATS models outperformed the other models in capturing sentiment influences and temporal dependencies, as seen by their lowest MSE and MAE values. Furthermore, the application of XAI approaches improved the model's transparency and reliability by offering insightful information on the characteristics impacting the predictions.
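
A sketch of the sentiment-feature step, assuming NLTK's VADER: score headlines and align them with trading days before they enter the forecasting models. The ticker headlines and dates are placeholders.

# Score headlines with VADER and aggregate to a daily sentiment feature.
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')

headlines = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-02", "2024-01-02", "2024-01-03"]),
    "title": ["Shares surge on earnings beat",
              "Analysts warn of weak guidance",
              "Company announces record buyback"],
})
sia = SentimentIntensityAnalyzer()
headlines["sentiment"] = headlines["title"].map(lambda t: sia.polarity_scores(t)["compound"])
daily_sentiment = headlines.groupby("date")["sentiment"].mean()
print(daily_sentiment)  # joined to the price series as an extra model input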

KEYWORDS

Time Series (TS) Forecasting, Explainable Artificial Intelligence (XAI), Sentiment Analysis.


Enhanced Multimedia Systems With Real-time Data Analytics and Automation

Partha Sarathi Samal, Paramount, Rocky Hill, Connecticut, USA

ABSTRACT

Integrating real-time data analytics and automation into multimedia systems greatly improves user experience and operational efficiency. This white paper explores the potential of these technologies, addressing current challenges, key technologies, applications, and future directions. By leveraging real-time data and automation, multimedia systems can provide seamless, high-quality content tailored to user preferences and network conditions.

KEYWORDS

Real-Time Data Analytics, Multimedia Systems, Automation, Video Streaming, API Testing, Quality Assurance.


Security, Trust and Privacy Challenges in AI-Driven 6G Networks

Helena Rifà-Pous, Victor Garcia-Font, Carlos Núñez-Gómez, and Julian Salas, Internet Interdisciplinary Institute (IN3), Universitat Oberta de Catalunya (UOC), Center for Cybersecurity Research of Catalonia (CYBERCAT), Barcelona, Spain

ABSTRACT

The advent of 6G networks promises unprecedented advancements in wireless communication, offering wider bandwidth and lower latency compared to its predecessors. This article explores the evolving infrastructure of 6G networks, emphasizing the transition towards a more disaggregated structure and the integration of artificial intelligence (AI) technologies. Furthermore, it examines the security, trust and privacy challenges and attacks in 6G networks, particularly those related to the use of AI. It presents a classification of network attacks stemming from its AI-centric architecture and explores technologies designed to detect or mitigate these emerging threats. The paper concludes by examining the implications and risks linked to the utilization of AI in ensuring a robust network.

KEYWORDS

6G, Security, Trust, Privacy, Threats, Attacks.


Development and Implementation of a Smart Pillbox System: Integrating ESP, Flask Server, and Flutterflow App for Enhanced Medication Adherence

Warren Zhang1, Soroush Mirzaee2, 1San Marino High School, 2701 Huntington Dr, San Marino, CA 91108, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

The issue I am aiming to solve is the timely intake of medication by individuals, which is crucial for their health and well-being [1]. I believe that the best way to solve such a problem is to create a device and app that remind the user to consume their pills at a time that they schedule. The different technologies and components of my project are the physical pillbox, the ESP, the flask server, and the app. The ESP is a small computer with a screen, placed inside the pillbox, with functions such as a pill reminder, where the user sets in how many hours they want to take their pill, and a pill count, where the user sets the number of each pill that they have [2]. The app was made with FlutterFlow and has many functionalities, such as the creation of a pill schedule for each of the user's pills, a history page that displays the times and the pills that the user has consumed, and various unique functions on the settings page [3]. The flask server is hosted through Render; it registers the user's ID in the app to an ESP after the user scans the QR code displayed on the ESP, so the information of the ESP and app is shared [4]. There were many technical difficulties in creating the pill schedule, as it was a tedious process involving multiple steps and functions; this was fixed by spending a lot of time working on it. It was also difficult to get the 3D-printed pillbox to fit perfectly, as the ESP requires specific modeling, which took more than 20 prints of the pillbox. Overall, my product may not be flawless, but it covers the potential problems and human error that may occur.

KEYWORDS

Hardware, FlutterFlow, Server, Medicine.


Revolutionizing Requirements Elicitation: Deep Learning-based Classification of Functional and Non-functional Requirements

Grace Hanna, Nathan Boyar, Nathan Garay, and Mina Maleki

ABSTRACT

The requirements elicitation phase in the software development life cycle (SDLC) is both critical and challenging, especially in the context of big data and rapid technological advancement. Traditional approaches like workshops and prototyping, while useful, often struggle to keep pace with the massive data volumes and rapidly changing user demands characteristic of modern technology. This paper introduces a data-driven approach that utilizes deep learning (DL) and natural language processing (NLP) to enhance the requirements elicitation process by classifying requirements into functional and non-functional categories. Our research involves a deep neural network (DNN) trained on a large dataset of transcriptions from client/user stories. This DNN can identify whether a specific line represents a functional requirement, a non-functional requirement, or neither. Our approach shows a marked improvement over previous methods, with a 33% increase in accuracy and an 18% increase in the F1 score. These results demonstrate the enhanced capability of our method compared to existing approaches, indicating that deep learning techniques can play a vital role in this context.
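
A scaled-down stand-in for this classifier, assuming scikit-learn: the paper's DNN and transcription dataset are not reproduced here, so TF-IDF features and a small multilayer perceptron illustrate the functional / non-functional / neither labeling on invented lines.

# Classify a transcript line as functional (F), non-functional (NF), or neither.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

lines = ["The system shall export reports as PDF",
         "Pages must load within two seconds",
         "Thanks, that covers everything for today"]
labels = ["F", "NF", "none"]

clf = make_pipeline(TfidfVectorizer(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
clf.fit(lines, labels)
print(clf.predict(["The app should encrypt stored passwords"]))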

KEYWORDS

Requirements engineering, Requirements elicitation, Deep learning, Natural language processing.


Status of Malaria in the African Continent - Data Mining Insights From Heterogeneous, but Interrelated Data Sources

Ken Muchira, Hemalatha Sabbineni, John Moses Bollarapu, and Kamrul Hasan

ABSTRACT

Malaria is a life-threatening mosquito-borne infectious disease caused by Plasmodium parasites. The African continent still suffers the most from this disease, for many reasons. In this research, we performed malaria data analysis using conventional data mining techniques for several African countries for the period 2000-2020. We were able to extract some key insights explaining the situation and enabling actionable responses. We were also able to make some concrete associations between finances and the malaria diagnostic methodologies adopted and practiced by certain countries. Finally, we make some concrete recommendations to combat malaria and to reduce infection and associated mortality rates.

KEYWORDS

Malaria, Anopheles Mosquitoes, Africa, World Health Organization (WHO), Gross Domestic Product (GDP), Microscopy Tests, Rapid Diagnostic Tests (RDTs).


SODU2-Net: A Novel Deep Learning-Based Approach for Salient Object Detection Utilizing U-Net

Hyder Abbas1, Shen Bing Ren2, Muhammad Asim3, Ahmed A. Abd El-Latif4, and Syeda Iqra Hassan4, 1School of Computer Science and Engineering, Central South University, Changsha 410083, China, 2EIAS Data Science and Blockchain Laboratory, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia, 3School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China, 4Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El-Koom, Egypt

ABSTRACT

Detecting and segmenting salient objects from natural scenes, often referred to as salient object detection, has attracted great interest in computer vision, and addressing the challenge posed by complex backgrounds is crucial for advancing the field. This paper proposes a novel deep learning-based architecture called SODU2-NET (Salient Object Detection U2-Net), which builds on the U-Net base structure. The model addresses a gap in previous work on complex backgrounds by employing a densely supervised encoder-decoder network, using sophisticated background subtraction techniques and advanced deep learning components that can discern relevant foreground information. Firstly, an enriched encoder block combines Full Fusion Features (FFF) with Atrous Spatial Pyramid Pooling (ASPP) at varying dilation rates to efficiently capture multi-scale contextual information, improving salient object detection in complex backgrounds and reducing the loss of information during down-sampling. Secondly, an attention module that refines the decoder is constructed to enhance the detection of salient objects in complex backgrounds by selectively focusing on relevant features; this allows the model to reconstruct detailed and contextually relevant information, which is essential for determining salient objects accurately. Finally, the architecture is improved by adding a residual block at the encoder end, responsible for both saliency prediction and map refinement. The proposed network learns the transformation between input images and ground truth, enabling accurate segmentation of salient object regions with clear borders and accurate prediction of fine structures. SODU2-NET demonstrates superior performance on five public datasets (DUTS, SOD, DUT-OMRON, HKU-IS, and PASCAL-S) and a new real-world dataset, the Changsha dataset. In a comparative assessment against FCN, SqueezeNet, DeepLab, and Mask R-CNN, the proposed SODU2-NET achieves improvements in precision (6%), recall (5%), and accuracy (3%). Overall, the approach shows promise for improving the accuracy and efficiency of salient object detection in a variety of settings.
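
For reference, a generic ASPP block in PyTorch: parallel dilated convolutions whose outputs are concatenated and projected. The channel counts and dilation rates are illustrative; the paper's FFF fusion and exact configuration are not reproduced.

# ASPP: multi-scale context via parallel dilated 3x3 convolutions.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)
    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

print(ASPP(64, 32)(torch.randn(1, 64, 128, 128)).shape)  # (1, 32, 128, 128)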

KEYWORDS

SOD, Image Processing, Computer Vision, Salient Object Detection, Deep Learning, U-Net, Attention Mechanism.


Enhancing Sentiment Analysis for Low-Resource Pashto Language: A BERT-Infused LSTM Framework

Abdul Hamid Azizi1, Muhammad Asim2, and Mudasir Ahmad Wani3, 1School of Computer Science and Engineering, Central South University, Changsha 410083 P.R. China, 2EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia, 3School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China

ABSTRACT

Recent advancements in multi-modal learning have significantly enhanced face anti-spoofing systems. Despite these improvements, real-world applications often face the challenge of missing modalities from different imaging sensors. Previous studies have largely ignored this issue or have increased model complexity without effectively addressing it. This study presents a robust yet straightforward methodology utilizing a multi-modal face anti-spoofing architecture with spatial-temporal encoders and a dedicated fusion unit. The spatial-temporal encoders extract features from each modality using ResNet34 and Transformer architectures, while augmentation and regularization techniques further enhance model performance. Various fusion methods are assessed for their effectiveness in managing missing modalities. Additionally, we present FaceMAE, a modular autoencoder designed to predict and reconstruct missing modalities. FaceMAE functions via a dual-phase process: encoding detected modalities to produce latent representations and subsequently decoding them to reconstruct missing modalities. Through the incorporation of transformer encoders and a flexible fusion module, FaceMAE enhances the ability to differentiate between live and spoof facial images. Evaluations on datasets such as CASIA-SURF, CASIA-SURF CeFA, and WMCA indicate that our method achieves competitive results.

KEYWORDS

Image processing, multi-modal learning, face anti-spoofing, missing modality scenarios, face attack detection, Data augmentation, spatial-temporal encoders.


An Optimized Ensemble Model With Advanced Feature Selection for Network Intrusion Detection

Afaq Ahmed1, Muhammad Asim2, Irshad Ullah1, Tahir Hussain1, Abdelhamied A. Ateya2,3, 1School of Computer Science and Engineering, Central South University, Changsha, 410083, Hunan, China, 2EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia, 3Department of Electronics and Communications Engineering, Zagazig University, Zagazig, 44519, Egypt.

ABSTRACT

In today's digital era, advancements in technology have led to unparalleled levels of connectivity but have also brought forth a new wave of cyber threats. Network Intrusion Detection Systems (NIDS) are crucial for ensuring the security and integrity of networked systems by identifying and mitigating unauthorized access and malicious activities. Traditional machine learning techniques have been extensively employed for this purpose due to their high accuracy and low false alarm rates. However, these methods often fall short in detecting sophisticated and evolving threats, particularly those involving subtle variations or mutations of known attack patterns. To address this challenge, our study presents the "Optimized Random Forest (OptForest)," an innovative ensemble model that combines decision forest approaches with Genetic Algorithms (GAs) for enhanced intrusion detection. GA-based decision forest construction offers notable benefits by traversing a wider exploration space and mitigating the risk of becoming stuck in local optima, resulting in the discovery of more accurate and compact decision trees. Leveraging advanced feature selection techniques, including Best-First Search, Particle Swarm Optimization (PSO), Evolutionary Search, and Genetic Search (GA), along with contemporary datasets, this research aims to enhance the adaptability and resilience of NIDS against modern cyber threats. We conducted a comprehensive evaluation of the proposed approach against several well-known machine learning models, including AdaBoostM1 (AbM1), K-Nearest Neighbor (KNN), J48 Decision Tree (J48), Multilayer Perceptron (MLP), Stochastic Gradient Descent (SGD), Naïve Bayes (NB), and Logistic Model Tree (LMT). The comparative analysis demonstrates the effectiveness and superiority of our method across various performance metrics, highlighting its potential to significantly enhance the capabilities of network intrusion detection systems.

KEYWORDS

Network Intrusion Detection Systems, Machine Learning, Ensemble Models, Cybersecurity, Feature Selection.


A Virtual Reality Training Simulation to Assist in High-fidelity Baseball Batting using Oculus Quest 2 and Unity Engine

Brian C. Xu1, Robert Gehr2, 1Flintridge Preparatory School, 4543 Crown Ave, La Canada Flintridge, CA 91011, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Often, people's busy schedules and lack of equipment make it difficult for them to get solid baseball training hours [1]. This paper seeks to remedy this issue by investigating the efficacy of a virtual baseball training solution [2]. The solution proposed in this paper involves developing a virtual reality baseball simulation that aims to accurately simulate an environment where baseball hitting can be trained without access to expensive baseball equipment or a huge time commitment. The baseball training solution was built in the Unity game engine and deployed to the virtual reality Meta Quest 2 platform. One primary feature of this solution is the pitching mechanism, where pitches are thrown to the player accurately [3]. Another feature is the ability to translate the player's movements in real life to the player's movements in the simulation. The solution was tested in two experiments: one to test the improvement of players' skills, and one to test the entertainment levels of different age groups. After collecting data, we found that players did improve with our solution, and that young children and older adults enjoyed it more than teens and people in their twenties. Based on these results, we believe that further research into baseball training solutions that utilize virtual simulations would be worthwhile. Future methodologies may improve upon fidelity and accessibility.

KEYWORDS

VR, Baseball, Unity, Simulation.


Optimizing Intrusion Detection System Performance Through Synergistic Hyperparameter Tuning and Advanced Data Processing

Samia Saidane1, Francesco Telch2, Kussai Shahin2, and Fabrizio Granelli1, 1DISI - University of Trento, Trento, Italy, 2Trentino Digitale Spa, Trento, Italy

ABSTRACT

Intrusion detection is vital for securing computer networks against malicious activities. Traditional methods struggle to detect complex patterns and anomalies in network traffic effectively. To address this issue, we propose a system combining deep learning, data balancing (K-means + SMOTE), high-dimensional reduction (PCA and FCBF), and hyperparameter optimization (Extra Trees and BO-TPE) to enhance intrusion detection performance. By training on extensive datasets like CIC IDS 2018 and CIC IDS 2017, our models demonstrate robust performance and generalization. Notably, the ensemble model "VGG19" consistently achieves remarkable accuracy (99.26% on CIC-IDS2017 and 99.22% on CSE-CIC-IDS2018), outperforming other models.
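
A sketch of the balancing-and-reduction stage named above, assuming imbalanced-learn and scikit-learn; plain SMOTE is shown, whereas the paper pairs it with K-means clustering, and the flow features are mocked. The deep models themselves are out of scope here.

# Oversample the minority class, then reduce dimensionality with PCA.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((1000, 40))                # stand-in flow features
y = np.r_[np.zeros(950), np.ones(50)]     # heavily imbalanced labels

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_red = PCA(n_components=20).fit_transform(X_bal)
print(X_red.shape, np.bincount(y_bal.astype(int)))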

KEYWORDS

Imbalanced Data Processing, Hyperparameter Optimization, Network Intrusion Detection Systems, Deep Learning, Network Traffic Data, NetFlow Data.


A WYSIWYG Document Editor to Solve the Issue of Writing Documents With Math Elements and Avoid the Steep Learning Curve of a Plain LaTeX Editor

Qinzhi Li1, Ang Li2, 1Arcadia High School, 180 Campus Drive, Arcadia, CA 91007, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

This project addresses the need for a fast and simple document editor that provides more functionality than Notepad and is less complicated than Microsoft Word, while also supporting writing math equations with LaTeX [4]. Existing simple document formats like Markdown may have a learning curve for new users due to the lack of a WYSIWYG (what you see is what you get) interface, and documents written in pure LaTeX have an even steeper learning curve [6]. By breaking down content into different nodes in a tree structure, the editor enables efficient rendering and editing of different elements. Math elements are handled with an interface similar to Desmos [7]. A key challenge is to ensure synchronization between the internal JavaScript object and the user-modified DOM [5]. This is achieved through cursor tracking, data-id attributes, and a mutation observer. Robustness is also very important, and the implementation revolves around it. Another challenge is to handle edge cases and cross-browser compatibility. Tests such as editing long documents have been conducted. A custom solution like this handles the scenario of writing quick notes with more features than Notepad while maintaining simplicity and offering a WYSIWYG interface for both text and math equations.

KEYWORDS

Document Editor, Web, LaTeX, WYSIWYG.