Accepted Papers
Model-based Systems Engineering Approach with SYSML for an Automatic Flight Control System

Haluk Altay and M. Furkan Solmazgül, Teknopark Istanbul, Turkish Aerospace, Istanbul, Turkey

ABSTRACT

Systems engineering is one of the most important branches of engineering for interdisciplinary work. Successfully realizing a multidisciplinary complex system is one of the most challenging tasks of systems engineering. Multidisciplinary work brings problems such as defining complex systems, ensuring communication between stakeholders, and establishing a common language among different design teams. In solving such problems, the traditional systems engineering approach cannot provide an efficient solution. In this paper, a model-based systems engineering approach is applied in a case study and is found to be more efficient. In the case study, the design of a helicopter automatic flight control system was realized by applying model-based design processes with tool integration. Requirement management, system architecture management and model-based systems engineering processes are explained and applied to the case study. Finally, the model-based systems engineering approach is shown to be effective compared with traditional systems engineering methods for complex systems in the aviation and defence industries.

KEYWORDS

Model-Based Systems Engineering, Automatic Flight Control System, SysML


Research on Traffic Data Recovery Based on Tensor Filling and Tensor Matrix Association Analysis

Shengbao Yang, Lianjie Li and Sanfeng Zhang, Yunnan University, Kunming, China

ABSTRACT

Traffic data is the data foundation for smart transportation construction. However, due to inclement weather and equipment damage, data are often missing during the collection of traffic data, which severely restricts progress in smart transportation construction. Therefore, traffic data recovery has become an urgent problem in the field of intelligent transportation. This paper proposes a technology based on Tensor Completion and Coupled Matrix and Tensor Factorizations (CMTF); aiming at the problems in traffic data recovery, it focuses on the study of traffic data recovery models suitable for extreme data loss situations. Aiming at the problem that the recovery accuracy of existing traffic data recovery methods declines sharply under extreme missing conditions, this paper proposes a traffic data recovery model based on multi-source data association analysis, verified with real taxi GPS positioning data and point of interest (POI) data. The experimental results show that the traffic data recovery model proposed in this paper can significantly improve the recovery accuracy of missing data and maintain good stability in the case of extreme data missingness.
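
A minimal, simplified sketch of the recovery idea (a 2-D matrix analogue of tensor completion, not the paper's CMTF model): missing entries of a road-by-time traffic matrix are filled iteratively with a low-rank SVD approximation. The synthetic data, the rank and the iteration count are illustrative assumptions.

```python
# Simplified 2-D analogue of tensor completion (not the paper's CMTF model):
# iteratively fill missing traffic entries with a truncated-SVD approximation.
import numpy as np

rng = np.random.default_rng(0)
true = rng.random((30, 5)) @ rng.random((5, 48))        # low-rank "road x time" speeds
mask = rng.random(true.shape) > 0.6                     # observe only ~40% of entries
observed = np.where(mask, true, np.nan)

X = np.where(mask, observed, np.nanmean(observed))      # initialize missing with the mean
for _ in range(50):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[5:] = 0.0                                         # keep a rank-5 approximation
    X = np.where(mask, observed, (U * s) @ Vt)          # keep observed entries fixed

rmse = np.sqrt(np.mean((X[~mask] - true[~mask]) ** 2))
print(f"RMSE on missing entries: {rmse:.4f}")
```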


Cloud-based Privacy Preserving Top-k Subgraph Querying on Large Graphs

Jianwen Zhao and Ada Wai-chee Fu, Department of Computer Science and Engineering, The Chinese University of Hong Kong, Sha Tin, Hong Kong

ABSTRACT

Subgraph isomorphism search is an important problem in graph data management. Due to its computational hardness, recent studies consider cloud computing while incorporating data anonymization for privacy protection. The state-of-the-art solution provides a framework but targets the enumeration of all the querying results, which can be prohibitively expensive. In this work, we study the problem of privacy-preserving cloud-based diversified top-k subgraph querying, that is, given a query graph Q and a number k, we aim to retrieve k isomorphic subgraphs from the given data graph G such that together they cover as many distinct vertices as possible. We show that the state-of-the-art solution cannot address top-k queries and we propose (1) a new graph anonymization technique equipped with a novel densest-block based vertex mapping method and a simple and effective label generalization method; (2) an iterative querying method that involves low communication overhead. Our extensive experiments on real-life datasets verify the efficiency and the effectiveness of the proposed methods, which significantly outperform the baselines.

KEYWORDS

Privacy Preservation, Subgraph Isomorphism, Top-k Querying, Results Diversity.
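
As a rough illustration of the querying objective only (without the anonymization or cloud machinery), the sketch below enumerates subgraph-isomorphic embeddings with networkx and greedily keeps k of them that cover the most distinct data-graph vertices; the toy graphs and the value of k are assumptions.

```python
# Illustration of diversified top-k subgraph matching (privacy machinery omitted):
# enumerate embeddings of query Q in data graph G, then greedily maximize coverage.
import networkx as nx
from networkx.algorithms import isomorphism

G = nx.petersen_graph()     # toy data graph
Q = nx.path_graph(3)        # toy query graph: a 3-vertex path
k = 3

matcher = isomorphism.GraphMatcher(G, Q)
embeddings = [set(m) for m in matcher.subgraph_isomorphisms_iter()]  # G-vertex sets

covered, selected = set(), []
for _ in range(min(k, len(embeddings))):
    best = max(embeddings, key=lambda e: len(e - covered))  # largest marginal coverage
    embeddings.remove(best)
    selected.append(best)
    covered |= best

print("selected embeddings:", selected)
print("distinct vertices covered:", len(covered))
```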


Appraisal study of similarity-based and embedding-based link prediction methods on graphs

Md Kamrul Islam, Sabeur Aridhi and Malika Smail-Tabbone, Universite de Lorraine, CNRS, Inria, LORIA, 54000 Nancy, France

ABSTRACT

The task of inferring missing links or predicting future ones in a graph based on its current structure is referred to as link prediction. Link prediction methods that are based on pairwise node similarity are well-established approaches in the literature and show good prediction performance in many real-world graphs, though they are heuristic. On the other hand, graph embedding approaches learn low-dimensional representations of nodes in a graph and are capable of capturing inherent graph features, and thus support the subsequent link prediction task in the graph. This appraisal paper studies a selection of methods from both categories on several benchmark (homogeneous) graphs with different properties from various domains. Beyond the intra- and inter-category comparison of the methods' performance, our aim is also to uncover interesting connections between GNN-based methods and heuristic ones as a means to alleviate the well-known black-box limitation.

KEYWORDS

Link Prediction, Graph Neural Network, Homogeneous Graph & Node Embedding.
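
A minimal sketch of one similarity-based heuristic of the kind the paper benchmarks (Adamic-Adar), using networkx on a toy graph; the graph and the cut-off of 10 predictions are assumptions, not the paper's experimental setup.

```python
# Similarity-based link prediction heuristic (Adamic-Adar) on a toy graph.
import networkx as nx

G = nx.karate_club_graph()                             # stand-in benchmark graph

# Score every non-edge; higher scores suggest a missing or future link.
scores = nx.adamic_adar_index(G, nx.non_edges(G))
top_links = sorted(scores, key=lambda t: t[2], reverse=True)[:10]

for u, v, s in top_links:
    print(f"predicted link ({u}, {v}) with score {s:.3f}")
```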


Secure Protocol for four D2D scenarios

Hoda Nematy, Malek-Ashtar University of Technology, Shabanlou, Babaee Hwy, Lavizan, Tehran, Iran

ABSTRACT

D2D is a new form of communication for reducing cellular traffic and increasing the efficiency of the cellular network. This form of communication was introduced for the 4th generation of cellular communication and will certainly play a big role in the 5th generation. Four D2D communication scenarios are defined in the literature: direct D2D and relaying D2D communication, both with and without cellular infrastructure. One of the major challenges for D2D protocols is to have a single secure protocol that can adapt to all four scenarios. In this paper, we propose a secure D2D protocol based on ARIADNE with TESLA and the LTE-A AKA protocol. We use the LTE-A AKA protocol for authentication and key agreement between source and destination, and TESLA for broadcast authentication between relaying nodes. Based on the results, our proposed protocol has little computation overhead compared to recent works and has less communication overhead than SODE, while preserving many security properties such as authentication, authorization, confidentiality, integrity, secure key agreement and secure routing transmission. We check authentication, confidentiality, reachability and secure key agreement of the proposed protocol with the ProVerif verification tool.

KEYWORDS

5th generation, Four D2D scenarios, LTE-A AKA protocol, secure D2D protocol, ProVerif.
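
A hedged sketch of the TESLA idea referenced above (a one-way key chain, MAC now, key disclosed later), written with Python's hashlib/hmac; the chain length, interval index and message are illustrative assumptions, not the proposed protocol.

```python
# TESLA-style broadcast authentication sketch: hash-chain keys, delayed disclosure.
import hashlib
import hmac

def hash_chain(seed: bytes, length: int):
    """Generate the key chain; keys are consumed in reverse order of generation."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain[::-1]          # chain[0] is the commitment published up front

keys = hash_chain(b"secret-seed", 5)
commitment = keys[0]

# Sender MACs a relay message in interval i with key i and discloses the key later.
message, i = b"relay route update", 2
tag = hmac.new(keys[i], message, hashlib.sha256).digest()

# Receiver: verify the disclosed key hashes back to the commitment, then check the MAC.
check = keys[i]
for _ in range(i):
    check = hashlib.sha256(check).digest()
assert check == commitment
assert hmac.compare_digest(tag, hmac.new(keys[i], message, hashlib.sha256).digest())
print("disclosed key verified and message authenticated")
```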


Optimal Detection Technique for Primary User Emulator in Cognitive Radio Network

Grace Olaleru, Henry Ohize, Abubakar Saddiq Mohammed, Department of Electrical and Electronics Engineering, Federal University of Technology, Minna, Nigeria

ABSTRACT

The primary user emulation attack (PUEA) is one of the most common attacks faced by Cognitive Radio Networks (CRNs). In this attack, a malicious user transmits a signal similar to the real primary user's (PU) signal to cause the legitimate secondary users (SUs) to leave the available channel while the PU is absent; hence, detecting this attacker is vital in building a real CRN. In this paper, the PUEA is detected based on the Time Difference of Arrival localization technique using a Modified Particle Swarm Optimization algorithm. This technique is capable of efficiently detecting the PUEA when located anywhere within the CRN. The performance of the developed technique was evaluated using the Mean Square Error and cumulative distribution frequency, and the results were compared with the Standard PSO via simulation in MATLAB. Simulation results showed that the MPSO performed better than the SPSO.

KEYWORDS

Cognitive Radio Network, Primary User Emulation Attack, Time Difference of Arrival, Modified Particle Swarm Optimization, Standard Particle Swarm Optimization.
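
A hedged NumPy sketch of TDOA localization solved with a plain particle swarm optimizer, to illustrate the underlying estimation problem; the sensor layout, swarm parameters and noiseless measurements are assumptions, and the paper's modified PSO is not reproduced here.

```python
# TDOA localization of an emulator via a basic PSO (standard PSO, not the paper's MPSO).
import numpy as np

rng = np.random.default_rng(0)
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([62.0, 35.0])                          # unknown attacker position

d = np.linalg.norm(sensors - true_pos, axis=1)
tdoa_ranges = d[1:] - d[0]                                 # range differences vs sensor 0

def residual(p):
    """Sum of squared TDOA residuals for a candidate position p."""
    dist = np.linalg.norm(sensors - p, axis=1)
    return np.sum(((dist[1:] - dist[0]) - tdoa_ranges) ** 2)

n, w, c1, c2 = 30, 0.7, 1.5, 1.5                           # swarm size and coefficients
pos = rng.uniform(0, 100, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([residual(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([residual(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("estimated position:", np.round(gbest, 2))           # should be close to [62, 35]
```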


PQEM: Product Quality Evaluation Method

Mariana Falco1 and Gabriela Robiolo2, 1LIDTUA/CONICET, Engineering School, Universidad Austral, Pilar, Buenos Aires, Argentina, 2LIDTUA, Engineering School, Universidad Austral, Pilar, Buenos Aires, Argentina

ABSTRACT

Project managers and leaders need to view and understand the entire picture of the development process, and also to comprehend the product quality level in a synthetic and intuitive way, to facilitate the decision of accepting or rejecting each iteration within the software life cycle. This article presents a novel solution called the Product Quality Evaluation Method (PQEM) to evaluate the quality characteristics of each iteration of a software product, using the Goal-Question-Metric approach, ISO/IEC 25010, and an extension of test coverage applied to each quality characteristic. The outcome of PQEM is a single value representing the quality of each iteration of a product, as an aggregate measure. An illustrative example of the method was carried out with a web and mobile application within the healthcare environment.

KEYWORDS

Quality Characteristics, Product Quality Measurement, Coverage, Quality Attributes.
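
A purely hypothetical sketch of the kind of aggregation the abstract describes (one value per iteration from per-characteristic coverage); the characteristic names, equal weights and acceptance threshold are illustrative assumptions, not the PQEM formula.

```python
# Hypothetical aggregation of per-characteristic test coverage into one quality value.
iteration_coverage = {                 # coverage reached per ISO/IEC 25010 characteristic
    "functional suitability": 0.92,
    "performance efficiency": 0.80,
    "usability": 0.75,
    "security": 0.88,
}
weights = {c: 1 / len(iteration_coverage) for c in iteration_coverage}   # equal weights

quality = sum(weights[c] * cov for c, cov in iteration_coverage.items())
print(f"iteration quality value: {quality:.2f}")   # e.g. accept the iteration if >= 0.80
```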


Computer Simulation Programs for Kinematics of Mechanisms

Sheveleva Tatiana, Fellowship, Technical University, Omsk, Russia

ABSTRACT

This article presents programs and software packages that allow you to design the kinematics of mechanisms, as well as several examples showing the use of these programs when creating geometric models.

KEYWORDS

Mechanism kinematics, CAD system, computer aided design system, Matlab, SolidWorks.


Corpus for Non-Functional Requirements

Maliha Sabir, Dr Ebad Banissi and Dr Mike Child, Department of Big Data and Informatics, London South Bank University, London, United Kingdom

ABSTRACT

State-of-the-art solutions to the classification of non-functional requirements (NFRs) are mostly based on supervised machine learning models trained from manually annotated examples. Yet these techniques suffer from various limitations, such as 1) the lack of a theoretical representation for NFRs and 2) the unavailability of a representative domain corpus. Our contribution is a representative domain corpus for NFRs, called the CUSTOM NFRs corpus, based on a sample drawn from software quality models. It consists of five NFR categories (Efficiency, Usability, Reliability, Portability and Maintainability), making a total of 1484 sentences. Further, we propose an iterative design to obtain a gold-standard multi-label corpus for NFRs based on a web-based crowdsourcing platform (Figure Eight). The procedure involved three annotators, and results are calculated using Cohen's Kappa. The analysis of the initial results shows a fair agreement. However, this study is limited to one iteration. The ultimate aim is to encourage future researchers to 1) train machine learning-based NLP systems or discover rules for rule-based systems and 2) evaluate the performance of NLP systems.

KEYWORDS

Software Requirements Engineering; Requirements Ontology; Non-functional Requirements, Requirement Corpus; Gold Standard Corpus, Crowdsourcing Annotations.
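
A minimal sketch of the agreement measure mentioned above (Cohen's kappa between two annotators) using scikit-learn; the ten example labels are invented, not sentences from the corpus.

```python
# Inter-annotator agreement on NFR category labels via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["Usability", "Reliability", "Efficiency", "Usability", "Portability",
               "Maintainability", "Reliability", "Usability", "Efficiency", "Reliability"]
annotator_b = ["Usability", "Reliability", "Usability", "Usability", "Portability",
               "Maintainability", "Efficiency", "Usability", "Efficiency", "Reliability"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")   # 0.21-0.40 is conventionally read as fair agreement
```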


Automatic Extraction and Identification of Open Source Software License Terms

Zhiqiang Wang, Sheng Wu, Guoqiang Xiao and Zili Zhang, College of Computer Science and Technology, Southwest University, Chongqing, China

ABSTRACT

The tremendous achievements of open source software have changed the business model and had a profound impact on the open source industry and even society, giving rise to various open source licenses that regulate the use of open source software in a legal form. However, the wide variety of licenses makes it difficult for developers to properly understand the differences between licenses. To alleviate this problem, this research benefits from the prosperous development of machine learning and presents a natural language processing framework to obtain the topics of, and automatically identify terms in, a license. Based on hand-selected dimensions of the license, we introduce a novel topic model that matches the license theme to its corresponding dimension. In experiments, we validate our model on public open source licenses we collected and labelled. We show our approach is an effective solution for the understanding of licenses.

KEYWORDS

Open Source Software License, License Terms, Topic Modelling, Latent Dirichlet Allocation.
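
A hedged scikit-learn sketch of topic extraction from license text with Latent Dirichlet Allocation; the four sample sentences and the two-topic setting are assumptions, not the authors' collected corpus or model.

```python
# LDA topic modelling over toy license sentences.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

license_texts = [
    "Redistribution and use in source and binary forms are permitted with attribution.",
    "The software is provided as is without warranty of any kind.",
    "Derivative works must be distributed under the same license terms.",
    "You may not use the trademarks of the licensor without permission.",
]

vectorizer = CountVectorizer(stop_words="english")          # bag-of-words features
X = vectorizer.fit_transform(license_texts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]    # five strongest words per topic
    print(f"topic {idx}: {', '.join(top)}")
```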


A Review On Emerging Methods For Data Security In Cloud Computing

Srinidhi Kulkarni and Rishabh Kumar Tripathi, Department of Computer Science and Engineering, International Institute of Information Technology, Bhubaneswar, India

ABSTRACT

In this paper we present a literature study of the security algorithms that have been proposed to secure cloud computing platforms. The paper presents the potential threats and security issues of cloud computing platforms and the research work carried out in these fields. Cryptography-based security algorithms such as RSA, DES, AES, ECC and BLOWFISH are discussed, and the works relating to these algorithms are also studied and their results presented. Some novel approaches in which machine learning frameworks were used to enforce the security of the cloud are also mentioned and discussed in detail. A comparative study of the security algorithms, based on their performance on various impact factors of a system, is also presented, drawing on past research. The discussion in this paper is a generalized one that is applicable to any service and any type of deployment of a cloud computing system. The paper aims to contribute to the domain knowledge of security and the different ways to enhance it.

KEYWORDS

Cloud computing, Security threats and breaches, Cryptography, Security Algorithms, Machine Learning, Quantum Cryptography.
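
A brief illustration of one of the surveyed primitives (AES, here in GCM mode) using the Python cryptography package; the key handling and message are simplified assumptions for the example.

```python
# AES-256-GCM encryption/decryption of a record before cloud storage.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)      # 256-bit AES key
nonce = os.urandom(12)                         # unique nonce per message
aesgcm = AESGCM(key)

plaintext = b"record stored on the cloud"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # None = no associated data
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
print("ciphertext length:", len(ciphertext))
```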


Stack and Deal: An Efficient Algorithm for Privacy Preserving Data Publishing

Vikas Thammanna Gowda, Department of Electrical Engineering and Computer Science, Wichita State University, Kansas, USA

ABSTRACT

Although k-Anonymity is a good way to publish microdata for research purposes, it still suffers from various attacks. Hence, many refinements of k-Anonymity have been proposed, such as l-Diversity and t-Closeness, with t-Closeness being one of the strictest privacy models. Satisfying t-Closeness for a lower value of t may yield equivalence classes with a high number of records, which results in greater information loss. For a higher value of t, equivalence classes are still prone to homogeneity, skewness, and similarity attacks. This is because equivalence classes can be formed with fewer distinct sensitive attribute values and still satisfy the constraint t. In this paper, we introduce a new algorithm that overcomes the limitations of k-Anonymity and l-Diversity and yields equivalence classes of size k with greater diversity, in which the frequency of a SA value across the equivalence classes differs by at most one.

KEYWORDS

k-Anonymity, l-Diversity, t-Closeness, Privacy Preserving Data Publishing.
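
An illustrative check, under one reading of the abstract's balance property, that each sensitive-attribute value's frequency differs by at most one across equivalence classes; the toy table and the check itself are assumptions, not the paper's algorithm.

```python
# Check that every sensitive value is spread almost evenly across equivalence classes.
records = [                      # (equivalence_class_id, sensitive_attribute_value)
    (0, "flu"), (0, "cancer"), (0, "hiv"), (0, "flu"),
    (1, "cancer"), (1, "flu"), (1, "hiv"), (1, "cancer"),
]

def evenly_distributed(rows):
    """True if each SA value's per-class count differs by at most one."""
    classes = sorted({ec for ec, _ in rows})
    for sa in {s for _, s in rows}:
        per_class = [sum(1 for ec, s in rows if ec == c and s == sa) for c in classes]
        if max(per_class) - min(per_class) > 1:
            return False
    return True

print(evenly_distributed(records))   # True for the toy table above
```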


Classification of High-Resolution Satellite Images from Urban Areas Based on Hybrid SVM and MIL

Magdy Shayboub Ali Mahmoud, Computer Science Dept., Faculty of Computers and Informatics, Ismailia, 41522, Suez Canal University, Egypt

ABSTRACT

Remotely sensed image classification has advanced considerably, given the availability and abundance of classification algorithms for images of various resolutions. A number of works have been successful in fusing spatial-spectral knowledge with support vector machines (SVM). In order to incorporate all these data in a composite approach, we suggest a hybrid multi-spectral and multi-instance procedure. This paper introduces an approach to classifying urban buildings through SVM-based classification with multi-instance learning (MIL). In this article we present the use of this model for the classification of high-resolution Quickbird images. This combination contributes to performance and efficiency. The suggested solution was tested on typical urban imagery scenes. The results show a major improvement in classification performance compared to the two attributes used separately, and the experiments indicate a very promising accuracy of 91.24%.

KEYWORDS

Multi-Instance Learning (MIL), Support Vector Machines (SVM), Quickbird Satellite images, textural and spatial metrics.
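
A minimal scikit-learn sketch of the SVM half of the approach, applied to synthetic spectral/textural feature vectors; the features and parameters are assumptions, and the multi-instance coupling is not reproduced here.

```python
# RBF-SVM classification of synthetic per-segment feature vectors.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic features standing in for Quickbird spectral bands plus texture metrics.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
svm.fit(X_tr, y_tr)
print("overall accuracy:", svm.score(X_te, y_te))
```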


Credit Card Fraud Detection using Supervised and Unsupervised Learning

Vikas Thammanna Gowda, Department of Electrical Engineering and Computer Science, Wichita State University, Kansas, USA

ABSTRACT

In today's economic scenario, credit card use has become common. These cards allow the user to make payments online and even in person. Online payments are very convenient, but they come with their own risk of fraud. With the increasing number of users, credit card frauds are also increasing at the same pace. Machine learning algorithms can be applied to tackle this problem. In this paper an evaluation of supervised and unsupervised machine learning algorithms is presented for credit card fraud detection.

KEYWORDS

Credit card fraud detection, Supervised learning, Unsupervised learning.
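
A hedged sketch of the comparison the abstract describes, using scikit-learn on synthetic imbalanced data; the real transaction dataset and the paper's exact models are not reproduced. It contrasts a supervised logistic regression with an unsupervised Isolation Forest.

```python
# Supervised vs. unsupervised fraud detection on synthetic imbalanced data.
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)   # ~1% "fraud"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)            # supervised
sup_auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

iso = IsolationForest(random_state=0).fit(X_tr)                    # unsupervised
unsup_auc = roc_auc_score(y_te, -iso.score_samples(X_te))          # lower score = anomaly

print(f"supervised ROC-AUC:   {sup_auc:.3f}")
print(f"unsupervised ROC-AUC: {unsup_auc:.3f}")
```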


Healthcare Analytics using Ensemble Learning

Abhishek Khare, Rahul Khanvilkar, Kunal Sondkar, Arathi Kamble, Department of Computer Engineering, NHITM, Thane, India

ABSTRACT

Data in healthcare is a collection of records of patients, hospitals, doctors and medical treatments, and it is growing so fast that it is difficult to maintain and analyze using traditional data analytics methods. To overcome this problem, machine learning techniques are applied to such large amounts of data. To get better accuracy, this paper proposes a machine learning approach known as ensemble learning, which combines the results of three machine learning algorithms; a soft voting method is used to combine their predictions, and the results are evaluated using the resulting accuracies.

KEYWORDS

Ensemble Learning, Soft Voting Method, Machine Learning.
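
A minimal scikit-learn sketch of soft voting over three base learners; the dataset and the choice of base models are assumptions standing in for the paper's healthcare data and algorithms.

```python
# Soft-voting ensemble: averaged class probabilities from three base classifiers.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)        # stand-in for a healthcare dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=5000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",                                 # average predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```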


Is Classical LSTM more Efficient than Modern GCN Approaches in the Context of Traffic Forecasting?

Haroun Bouchemoukha, Mohamed Nadjib Zennir and Atidel Lahoulou, Mohammed Seddik Benyahia University, Jijel, Algeria

ABSTRACT

Traffic forecasting is one of the most difficult tasks in the area of intelligent transportation systems (ITS) because of complex spatial correlations on road networks and the non-linear temporal dynamics of changing road conditions. To address these issues, researchers have proposed models that combine Graph Convolution Networks (GCN) and Recurrent Neural Networks (RNN) in order to inherit the advantages of both and become capable of capturing spatial-temporal dependencies. Judging the efficiency of models only by their precision, without concern for their structure, has made the models more complex, although simple models sometimes produce better results. In this paper, we propose a simple model, called Long Short-Term Memory network for Traffic Forecasting (LSTM-TF), which uses the LSTM to extract spatial-temporal dependencies. Experiments demonstrate that the LSTM-TF model outperforms state-of-the-art baselines on real-world traffic datasets, supporting our hypothesis that simple models such as LSTM-TF sometimes produce better results than more complex ones.

KEYWORDS

Traffic forecasting, recurrent neural network (RNN), long short-term memory network (LSTM), spatial-temporal dependency, convolutional neural network (CNN) & graph convolution network (GCN).
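
A minimal PyTorch sketch of the kind of plain LSTM forecaster the paper argues for; the layer sizes, 12-step history and 207-sensor input are assumptions, not the LSTM-TF configuration.

```python
# Plain LSTM forecaster: predict the next time step for every road sensor.
import torch
import torch.nn as nn

class SimpleLSTMForecaster(nn.Module):
    def __init__(self, n_sensors: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)

    def forward(self, x):                   # x: (batch, time_steps, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # forecast from the last hidden state

model = SimpleLSTMForecaster(n_sensors=207)
history = torch.randn(32, 12, 207)          # toy batch: 12 past steps, 207 sensors
prediction = model(history)
print(prediction.shape)                     # torch.Size([32, 207])
```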


Reach Us

icaita@icaita2021.org


icaitaconf@yahoo.com
