CAINE 2023: Papers with Abstracts

Papers
Abstract. Heart disease is the second leading cause of death in Japan, contributing to 15% of all fatalities. Consequently, there is a pressing need to unveil the pathogenesis of this disease and develop innovative treatment methods. Traditional analysis methods, however, require a significant number of man-hours and a long execution time, while being limited in the amount of information they can obtain due to their reliance on two-dimensional images. In this study, we propose a classification method for three-dimensional quantitative analysis of myocardial tissue that considers the cell-specific features existing in the tissue. Using the proposed method, we extracted the nucleus, cell, and vascular endothelial cell membrane regions, and derived two quantitative indices: the coverage rate and the filling rate. The experimental results showed that the coverage rate, which quantifies the extent to which the nuclei of vascular endothelial cells are covered by the vascular endothelial cell membrane, is effective in identifying vascular endothelial cells. In addition, the filling rate, which quantifies the characteristic of cardiomyocytes that the nucleus occupies only a small percentage of the total cell volume, was shown to be effective in identifying cardiomyocytes.
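As a minimal sketch of how two such indices might be computed from 3D segmentation masks, the snippet below assumes hypothetical boolean arrays nucleus, cell, and membrane produced by a prior segmentation step; the names and the dilation-based notion of "covered" are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.ndimage import binary_dilation

    def coverage_rate(nucleus, membrane):
        # Fraction of nucleus voxels lying within one voxel of the membrane;
        # a high value suggests a vascular endothelial cell nucleus.
        covered = np.logical_and(nucleus, binary_dilation(membrane)).sum()
        return covered / max(int(nucleus.sum()), 1)

    def filling_rate(nucleus, cell):
        # Share of the whole cell volume occupied by the nucleus;
        # cardiomyocytes are expected to show a small value.
        return int(nucleus.sum()) / max(int(cell.sum()), 1)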
Abstract. The implementation of cybersecurity measures in the manufacturing industry is crucial as organizations increasingly adopt digital technologies and face escalating cyber threats. This research paper aims to identify the challenges associated with implementing effective cybersecurity in the manufacturing industry and utilizes the Average Analytic Hierarchy Process (AHP) to evaluate and prioritize these challenges.
This research contributes to the existing knowledge by addressing the research gap regarding cybersecurity challenges in the manufacturing industry. The findings offer practical guidance for manufacturing organizations seeking to enhance their cybersecurity posture, enabling them to safeguard critical assets, ensure uninterrupted production processes, and protect sensitive information.
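For readers unfamiliar with AHP, the core computation is standard: a pairwise comparison matrix is reduced to a priority vector via its principal eigenvector, with a consistency check. The sketch below uses a made-up 3x3 matrix, not the paper's survey data, and omits the averaging of multiple experts' judgments.

    import numpy as np

    # Illustrative pairwise comparison matrix for three hypothetical challenges.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # The principal eigenvector gives the priority weights.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency ratio: CR = ((lambda_max - n) / (n - 1)) / RI,
    # where RI = 0.58 is Saaty's random index for n = 3.
    n = A.shape[0]
    CR = ((eigvals.real[k] - n) / (n - 1)) / 0.58
    print("priority weights:", w.round(3), "CR:", round(CR, 3))

A CR below 0.1 is conventionally taken to indicate acceptably consistent judgments.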
Abstract. Software Process Improvement (SPI) aims to achieve quality in software products for software organizations, as it helps to manage and improve the development processes. The success of software products highly depends on the right execution of software processes. The COVID-19 pandemic has severely affected the workflow of software organizations across distributed geographical locations, resulting in difficulties in process execution, which is a threat to software process improvement activity. The primary objective of this research is to provide a process improvement model for software development organizations for better management and improvement of the software development processes during the COVID-19 pandemic. Our proposed model is based on the objectives of the ‘Team Software Process’ (TSP) and ‘Personal Software Process’ (PSP) models to effectively manage the software development processes for both the teams and individuals involved in remote development during the COVID-19 pandemic. The proposed model can also be applied in any uncertain situation other than COVID-19 to assist software organizations during remote work.
Abstract. This paper describes a simulated audio dataset of spoken words that accommodates microphone array designs for training and evaluating keyword spotting systems. With this dataset, one can train a neural network to detect the direction of the speaker. The dataset is an extended version of the original, with noise added at random locations during speech and with different rooms exhibiting different reverberation; hence, it should be closer to real-world long-range applications. Direction-of-arrival estimation activated by keyword spotting could be a new challenge for such systems; we call this task KWDOA. The dataset could serve as an entry-level benchmark for microphone array designs.
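As an illustration of the kind of baseline such a dataset enables, the generic GCC-PHAT estimator below computes the inter-microphone delay and the corresponding far-field angle for a two-microphone pair; it is a textbook NumPy implementation, not the dataset's reference code.

    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        # Generalized cross-correlation with PHAT weighting.
        n = len(sig) + len(ref)
        R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
        cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)
        max_shift = n // 2
        if max_tau is not None:
            max_shift = min(int(fs * max_tau), max_shift)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs   # delay in seconds

    def doa_degrees(tau, d, c=343.0):
        # Far-field angle of arrival for two mics spaced d metres apart.
        return np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))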
Abstract. This paper introduces an EXplainable Learning Analytic Dashboard (EX-LAD) that presents learning analytics data on student performance, engagement, and perseverance in a clear and easily understandable manner. The main goal of this study is to make this information accessible to both teachers and students, who may not possess extensive knowledge in data analysis, and to demonstrate the effectiveness of the relationship between performance, engagement, and perseverance in identifying student difficulties. This dashboard enables teachers to gain valuable information about their students’ progress, identify at-risk learners, and provide targeted support. Similarly, students can use this dashboard to track their own learning journey, identify their strengths and weaknesses, and make informed decisions to improve their academic performance. It integrates visualizations to represent various aspects of student learning, such as performance, engagement, and perseverance. To demonstrate the effectiveness of our dashboard, we conducted a case study using real data collected from ESIEE-IT, an engineering school in France, during the academic year 2021-2022. This case study serves as concrete evidence of the impact and value our dashboard brings to the educational context.
Abstract. As digital commerce becomes established among consumers, the need to capture these users grows. The demand for techniques, tools, and strategies grows as digital sales channels multiply and as consumers continue to adopt new online shopping habits. This is why e-commerce specialists today rank among the most in-demand profiles in the commercial area. Due to the high growth rate of e-commerce globally, companies that lacked knowledge of this marketing channel had to adapt in order to keep their businesses competitive.
For this purpose, a quality model of workflows, metrics, and indicators is proposed, based on quality standards and on information collected from the software and computer services industry in the region, emphasizing an integration architecture with other systems to realize the digital transformation of companies.
Abstract. Exponential-time algorithms for solving intractable problems are inefficient compared to polynomial-time algorithms for solving tractable problems, as the execution time of the former grows rapidly as problem size increases. A problem is NP-complete when it is non-deterministic polynomial (NP) and all other NP problems are polynomial-time reducible to it. The partition problem is one of the simplest NP-complete problems. Many real-life applications can be modeled as NP-complete problems, and it is important for software developers to understand the limitations of the existing algorithms that solve them. Solving the partition problem is a time-consuming endeavor. Exact algorithms can find solutions in a reasonable amount of time only for small instances of these problems. Large instances of NP-hard problems take so long to solve with exact algorithms that, for practical purposes, they should be considered intractable. The execution time required to find a solution to instances of the partition problem is greatly reduced using a Field Programmable Gate Array (FPGA). In this paper, we discuss the use of the PYNQ board in conjunction with an overlay to accelerate the execution of a function that evaluates whether a partition is a solution to an instance of the partition problem. To assist with the evaluation, four different overlays are created, and a performance comparison among them and native Python is then presented in the paper.
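The evaluated function is easy to state in software; a minimal native-Python baseline (a paraphrase for illustration, not the paper's overlay code) checks one candidate partition and, by extension, brute-forces an instance:

    def is_solution(values, mask):
        # The bitmask selects one subset; the partition is a solution
        # when that subset holds exactly half of the total sum.
        subset = sum(v for i, v in enumerate(values) if mask >> i & 1)
        return 2 * subset == sum(values)

    def solve_partition(values):
        # Brute force: exponential in len(values), which is exactly why
        # hardware acceleration of is_solution pays off on larger instances.
        if sum(values) % 2:
            return None                              # odd total: no equal split
        for mask in range(1 << (len(values) - 1)):   # last element fixed on one side
            if is_solution(values, mask):
                return mask
        return None

    print(solve_partition([3, 1, 1, 2, 2, 1]))       # prints a mask for a valid split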
Abstract. Due to the growing demands of big data, data centers are expanding their systems to improve their capacity and capability. Common expansion techniques are adding more memory, increasing storage capacity, and attaching hardware accelerator devices. Benefits aside, such expansion puts a higher demand on the host system due to the increasing amount of data movement. It quickly consumes the available system bandwidth and places a heavy burden on data transfers among the devices. To ease this situation, recent advanced solutions have appeared to optimize the data flow. These include peer-to-peer data transfer, allowing direct device-to-device data exchange without involving host memory. This paper evaluates the system performance of peer-to-peer (P2P) data transfer among connected devices. The results show that P2P data transfer between two devices in the system is 2x-6x faster than non-P2P transfers by bypassing the host memory. Not only does it reduce memory occupancy, but it also lowers the system power cost.
Abstract. The “curse of dimensionality” in machine learning refers to the rapidly increasing amount of training data required as features are collected from higher-dimensional spaces. Researchers generally use one of several dimensionality reduction methods to visualize data and estimate data trends. Feature engineering and selection minimize dimensionality and optimize algorithms, but the dimensionality must be matched to the data to preserve information. This paper compares dimensionality reduction methods by their effect on final model evaluation: the data set is first encoded in a smaller dimension to avoid the curse of dimensionality, and the model is then trained with a manageable number of features.
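As one common instance of this encode-then-train procedure, the sketch below uses scikit-learn's PCA on a built-in dataset; the specific estimator, component count, and dataset are illustrative choices, not necessarily those of the paper.

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    X, y = load_digits(return_X_y=True)   # 64 features per sample

    # Encode into 16 dimensions, then train on the reduced features.
    model = make_pipeline(PCA(n_components=16), LogisticRegression(max_iter=1000))
    print("mean CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))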
Abstract. This paper presents a cryptographic solution for establishing trust in peer-to-peer (P2P) networks, addressing issues of privacy, performance, and anonymity. Our protocol utilizes Zero-Knowledge Proofs (ZKP) for continuous trust validation during data transfers. This procedure compels each node to continually demonstrate its integrity, significantly decreasing the potential for network attacks. Upon evaluation, the protocol proved to be highly scalable and efficient, expanding network reach without requiring additional control messages. This result validates the protocol’s robustness, suggesting its potential use in larger and more intricate P2P network architectures.
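The abstract does not specify which ZKP construction is used; as a generic illustration of proving integrity without revealing a secret, the toy Schnorr identification round below lets a node prove knowledge of a discrete-log secret x. The group parameters are deliberately simple and not production-grade.

    import secrets

    p = 2**127 - 1          # a Mersenne prime; toy modulus for illustration only
    g = 3

    x = secrets.randbelow(p - 1)          # prover's long-term secret
    y = pow(g, x, p)                      # published public key

    # One proof round: commit, challenge, respond.
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)                      # prover's commitment
    c = secrets.randbelow(p - 1)          # verifier's random challenge
    s = (r + c * x) % (p - 1)             # prover's response

    # Verifier accepts iff g^s == t * y^c (mod p), learning nothing about x.
    assert pow(g, s, p) == (t * pow(y, c, p)) % p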
Abstract. Tensor decomposition techniques have gained significant attention in cancer research due to their ability to unravel complex and high-dimensional data structures. In this study, we comprehensively review the research trends from 2013 to 2023. Several themes are discussed, including the problems and challenges regarding cancer datasets, specifically image data and omics data. We also explore proposed tensor decomposition algorithms to tackle these challenges and their applications in different types of cancer, as well as the limitations and shortcomings of this field, which call for further research and development. Our objective is to investigate the application of tensor decomposition methods in cancer research. We first introduce the concept of tensors as multidimensional arrays and highlight their relevance in modeling cancer data. Subsequently, we discuss various tensor decomposition algorithms, such as Tucker decomposition and Canonical Polyadic decomposition, along with their advantages and limitations. This review aims to assist researchers interested in tensor decomposition techniques, which offer a valuable tool for analyzing complex and heterogeneous cancer data, enabling the discovery of hidden patterns and providing biological insights.
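For concreteness, both decompositions reviewed here are available off the shelf; the sketch below applies them to a synthetic 3-way tensor (dimensions chosen arbitrarily to stand in for, e.g., samples x features x conditions) using the tensorly library.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac, tucker

    X = tl.tensor(np.random.rand(20, 30, 10))     # synthetic 3-way tensor

    # Canonical Polyadic (CP): a sum of rank-one components.
    cp = parafac(X, rank=5)
    print("CP error:", float(tl.norm(X - tl.cp_to_tensor(cp)) / tl.norm(X)))

    # Tucker: a small core tensor plus one factor matrix per mode.
    core, factors = tucker(X, rank=[5, 5, 3])
    X_tk = tl.tucker_to_tensor((core, factors))
    print("Tucker error:", float(tl.norm(X - X_tk) / tl.norm(X)))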