Accepted Papers
Model-based Systems Engineering Approach with SysML for an Automatic Flight Control System

Haluk Altay and M. Furkan Solmazgül, Teknopark Istanbul, Turkish Aerospace, Istanbul, Turkey

ABSTRACT

Systems engineering is a central engineering discipline for interdisciplinary work. Successfully developing a multidisciplinary complex system is one of the most challenging tasks of systems engineering. Multidisciplinary work brings problems such as defining complex systems, ensuring communication between stakeholders, and establishing a common language among different design teams. In solving such problems, the traditional systems engineering approach cannot provide an efficient solution. In this paper, a model-based systems engineering approach is applied to a case study and is found to be more efficient. In the case study, the design of a helicopter automatic flight control system was realized by applying model-based design processes with tool integration. Requirement management, system architecture management, and model-based systems engineering processes are explained and applied to the case study. Finally, the model-based systems engineering approach is shown to be more effective than traditional systems engineering methods for complex systems in the aviation and defence industries.

KEYWORDS

Model-Based Systems Engineering, Automatic Flight Control System, SysML


Research on Traffic Data Recovery Based on Tensor Filling and Tensor Matrix Association Analysis

Shengbao Yang, Lianjie Li and Sanfeng Zhang, Yunnan University, Kunming, China

ABSTRACT

Traffic data is the data foundation for smart transportation construction. However, due to inclement weather and equipment damage, data are often missing during traffic data collection, which severely restricts the progress of smart transportation construction. Traffic data recovery has therefore become an urgent problem in the field of intelligent transportation. This paper proposes a technique based on tensor completion and Coupled Matrix and Tensor Factorizations (CMTF); targeting the problems in traffic data recovery, it focuses on recovery models suited to extreme data-loss situations. To address the sharp decline in recovery accuracy of existing methods under extreme missing conditions, this paper proposes a traffic data recovery model based on multi-source data association analysis, which is validated with real taxi GPS positioning data and point of interest (POI) data. The experimental results show that the proposed model significantly improves the recovery accuracy of missing data and maintains good stability in cases of extreme data missingness.
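As a rough illustration of the tensor-completion building block (not the CMTF-based model of the paper), the sketch below imputes missing entries of a small traffic-style tensor with a plain CP-ALS loop in NumPy; the tensor layout and all numbers are made up.

```python
# Minimal sketch of low-rank tensor completion for traffic-style data, assuming
# a 3-way tensor (e.g. road segment x day x time-of-day) with missing entries.
# Generic CP-ALS imputation, not the paper's CMTF model.
import numpy as np

def cp_completion(T, mask, rank=5, n_iter=50, lam=1e-3, seed=0):
    """Fill missing entries of a 3-way tensor T (mask==True where observed)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # impute the current estimate into the missing cells (EM-style)
        est = np.einsum('ir,jr,kr->ijk', A, B, C)
        X = np.where(mask, T, est)
        # alternating least-squares updates on the filled tensor
        A = np.linalg.solve((B.T @ B) * (C.T @ C) + lam * np.eye(rank),
                            np.einsum('ijk,jr,kr->ri', X, B, C)).T
        B = np.linalg.solve((A.T @ A) * (C.T @ C) + lam * np.eye(rank),
                            np.einsum('ijk,ir,kr->rj', X, A, C)).T
        C = np.linalg.solve((A.T @ A) * (B.T @ B) + lam * np.eye(rank),
                            np.einsum('ijk,ir,jr->rk', X, A, B)).T
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# toy usage: recover a synthetic low-rank tensor with ~70% of entries missing
true = np.einsum('ir,jr,kr->ijk', *np.random.rand(3, 20, 3))
mask = np.random.rand(*true.shape) > 0.7
recovered = cp_completion(true, mask, rank=3)
print("RMSE on missing cells:", np.sqrt(((recovered - true)[~mask] ** 2).mean()))
```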


Cloud-based Privacy Preserving Top-k Subgraph Querying on Large Graphs

Jianwen Zhao and Ada Wai-chee Fu, Department of Computer Science and Engineering, The Chinese University of Hong Kong, Sha Tin, Hong Kong

ABSTRACT

Subgraph isomorphism search is an important problem in graph data management. Due to its computational hardness, recent studies consider cloud computing while incorporating data anonymization for privacy protection. The state-of-the-art solution provides a framework but targets the enumeration of all query results, which can be prohibitively expensive. In this work, we study the problem of privacy-preserving cloud-based diversified top-k subgraph querying, that is, given a query graph Q and a number k, we aim to retrieve k isomorphic subgraphs from the given data graph G such that together they cover as many distinct vertices as possible. We show that the state-of-the-art solution cannot address top-k queries, and we propose (1) a new graph anonymization technique equipped with a novel densest-block based vertex mapping method and a simple and effective label generalization method; (2) an iterative querying method that involves low communication overhead. Our extensive experiments on real-life datasets verify the efficiency and effectiveness of the proposed methods, which significantly outperform the baselines.

KEYWORDS

Privacy Preservation, Subgraph Isomorphism, Top-k Querying, Results Diversity.
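To make the diversified top-k objective concrete, here is a minimal sketch that enumerates matches of Q in G with NetworkX and greedily keeps the k matches covering the most distinct vertices; it ignores the paper's anonymization and cloud framework entirely, and the toy graph and labels are invented.

```python
# Illustrative sketch of the *diversified top-k* objective only: enumerate
# (induced) subgraph matches of Q in G and greedily keep k matches that
# together cover as many distinct vertices of G as possible.
import networkx as nx
from networkx.algorithms import isomorphism

def diversified_top_k(G, Q, k):
    gm = isomorphism.GraphMatcher(
        G, Q, node_match=isomorphism.categorical_node_match("label", None))
    # de-duplicate automorphic mappings by keeping only the matched vertex sets
    candidates = list({frozenset(m.keys()) for m in gm.subgraph_isomorphisms_iter()})
    chosen, covered = [], set()
    for _ in range(min(k, len(candidates))):
        # pick the match adding the most not-yet-covered vertices (greedy set cover)
        best = max(candidates, key=lambda s: len(s - covered))
        chosen.append(best)
        covered |= best
        candidates.remove(best)
    return chosen, covered

# toy usage: find diverse triangles in a small labelled graph
G = nx.cycle_graph(6)
G.add_edges_from([(0, 2), (2, 4), (4, 0)])
nx.set_node_attributes(G, "a", "label")
Q = nx.complete_graph(3)
nx.set_node_attributes(Q, "a", "label")
matches, covered = diversified_top_k(G, Q, k=2)
print("chosen matches:", matches, "| distinct vertices covered:", len(covered))
```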


Secure Protocol for Four D2D Scenarios

Hoda Nematy, Malek-Ashtar University of Technology, Shabanlou, Babaee Hwy, Lavizan, Tehran, Iran

ABSTRACT

D2D is a new form of communication for reducing cellular traffic and increasing the efficiency of the cellular network. This form of communication was introduced for the 4th generation of cellular communication and will certainly play a big role in the 5th generation. Four D2D communication scenarios are defined in the literature: direct D2D and relaying D2D communication, each with and without cellular infrastructure. One of the major challenges for D2D protocols is to have a single secure protocol that can adapt to all four scenarios. In this paper, we propose a secure D2D protocol based on ARIADNE, with TESLA and the LTE-A AKA protocol. We use the LTE-A AKA protocol for authentication and key agreement between source and destination, and TESLA for broadcast authentication between relaying nodes. Based on the results, our proposed protocol has little computation overhead compared to recent works and less communication overhead than SODE, while preserving many security properties such as authentication, authorization, confidentiality, integrity, secure key agreement, and secure routing transmission, among others. We check the authentication, confidentiality, reachability, and secure key agreement of the proposed protocol with the ProVerif verification tool.

KEYWORDS

5th generation, Four D2D scenarios, LTE-A AKA protocol, secure D2D protocol, ProVerif.
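The TESLA building block mentioned above (a one-way key chain plus delayed key disclosure) can be sketched as follows; this shows only the broadcast-authentication idea, not the full ARIADNE/LTE-A AKA protocol, and the interval timing is simplified.

```python
# Minimal sketch of TESLA-style broadcast authentication between relay nodes:
# MAC with an interval key, disclose the key later, verify it against a
# hash-chain commitment. Not the paper's complete protocol.
import hashlib, hmac

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# --- sender setup: one-way key chain, chain[0] = K_0 is the public commitment
n = 8
chain = [b"secret-seed"]
for _ in range(n):
    chain.append(H(chain[-1]))
chain.reverse()                      # chain[i] = K_i, verifiable via i hashes
commitment = chain[0]

# --- sender in interval i: MAC the message with K_i, disclose K_i afterwards
i = 3
msg = b"route request via relay R2"
tag = hmac.new(chain[i], msg, hashlib.sha256).digest()
disclosed_key = chain[i]             # released only after interval i has ended

# --- receiver: authenticate the disclosed key, then check the MAC
def verify(msg, tag, key, interval, commitment):
    k = key
    for _ in range(interval):        # hash the key back down to K_0
        k = H(k)
    if k != commitment:
        return False                 # key is not part of the committed chain
    return hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

print(verify(msg, tag, disclosed_key, i, commitment))   # True
```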


PQEM: Product Quality Evaluation Method

Mariana Falco1 and Gabriela Robiolo2, 1LIDTUA/CONICET, Engineering School, Universidad Austral, Pilar, Buenos Aires, Argentina, 2LIDTUA, Engineering School, Universidad Austral, Pilar, Buenos Aires, Argentina

ABSTRACT

Project managers and leaders need to view and understand the entire picture of the development process, and also to comprehend the product quality level in a synthetic and intuitive way, in order to facilitate the decision of accepting or rejecting each iteration within the software life cycle. This article presents a novel solution called the Product Quality Evaluation Method (PQEM) to evaluate the quality characteristics of each iteration of a software product, using the Goal-Question-Metric approach, ISO/IEC 25010, and an extension of test coverage applied to each quality characteristic. The outcome of PQEM is a single value per iteration of a product, representing its quality as an aggregate measure. An illustrative example of the method was carried out with a web and mobile application within the healthcare environment.

KEYWORDS

Quality Characteristics, Product Quality Measurement, Coverage, Quality Attributes.
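Since the abstract does not give the exact aggregation formula, the following is only a hedged sketch of the kind of computation PQEM describes: per-characteristic scores derived from test results, combined into one value per iteration. The attribute names, weights, and acceptance-ratio formula are assumptions made for illustration.

```python
# Hedged sketch: ISO/IEC 25010-style characteristics scored from measured
# attributes, aggregated into one quality value per iteration.
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    passed: int     # test cases passed for this quality attribute
    total: int      # test cases defined for it (its coverage)

def characteristic_score(attrs):
    passed = sum(a.passed for a in attrs)
    total = sum(a.total for a in attrs)
    return passed / total if total else 0.0

def iteration_quality(characteristics, weights=None):
    """Aggregate per-characteristic scores into one value for the iteration."""
    names = list(characteristics)
    weights = weights or {n: 1.0 for n in names}
    w_sum = sum(weights[n] for n in names)
    return sum(weights[n] * characteristic_score(characteristics[n]) for n in names) / w_sum

# toy iteration of a healthcare app (hypothetical numbers)
iteration = {
    "Usability":   [Attribute("task completion", 18, 20), Attribute("error rate", 9, 10)],
    "Reliability": [Attribute("recoverability", 7, 10)],
    "Security":    [Attribute("access control", 12, 12)],
}
print(f"iteration quality level: {iteration_quality(iteration):.2f}")
```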


Computer Simulation Programs for Kinematics of Mechanisms

Sheveleva Tatiana, Fellowship, Technical University, Omsk, Russia

ABSTRACT

This article presents programs and software packages that allow the designer to analyse the kinematics of mechanisms, together with several examples showing the use of these programs when creating geometric models.

KEYWORDS

Mechanism kinematics, CAD system, computer aided design system, Matlab, SolidWorks.
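As a hypothetical example of the kind of kinematic computation such packages automate (the article itself relies on tools such as Matlab and SolidWorks), here is a position analysis of a planar slider-crank mechanism in Python:

```python
# Position analysis of a slider-crank mechanism: slider displacement for a
# given crank angle, crank length r and connecting-rod length l.
import numpy as np

def slider_position(theta, r, l):
    """Slider displacement x(theta) = r*cos(theta) + sqrt(l^2 - (r*sin(theta))^2)."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

theta = np.linspace(0.0, 2 * np.pi, 8)       # one crank revolution
x = slider_position(theta, r=0.05, l=0.20)   # dimensions in metres
for t, xi in zip(theta, x):
    print(f"crank angle {np.degrees(t):6.1f} deg -> slider at {xi * 1000:6.1f} mm")
```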


Corpus for Non-Functional Requirements

Maliha Sabir, Dr Ebad Banissi and Dr Mike Child, Department of Big Data and Informatics, London South Bank University, London, United Kingdom

ABSTRACT

State-of-the-art solutions to the classification of non-functional requirements (NFRs) are mostly based on supervised machine learning models trained from manually annotated examples. Yet these techniques suffer from various limitations, such as 1) the lack of a theoretical representation for NFRs and 2) the unavailability of a representative domain corpus. Our contribution is a representative domain corpus for NFRs, called the CUSTOM NFRs corpus, based on a sample drawn from software quality models. It consists of five NFR categories (Efficiency, Usability, Reliability, Portability and Maintainability), making a total of 1484 sentences. Further, we propose an iterative design to obtain a gold-standard multi-label corpus for NFRs based on a web-based crowdsourcing platform (Figure Eight). The procedure involved three annotators, and agreement is calculated using Cohen's kappa. The analysis of the initial results shows fair agreement; however, this study is limited to one iteration. The ultimate aim is to encourage future researchers to 1) train machine-learning-based NLP systems or discover rules for rule-based systems, and 2) evaluate the performance of NLP systems.

KEYWORDS

Software Requirements Engineering, Requirements Ontology, Non-functional Requirements, Requirement Corpus, Gold Standard Corpus, Crowdsourcing Annotations.
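A small sketch of the agreement measure the study relies on, Cohen's kappa between two annotators; the sentences and labels below are invented for illustration.

```python
# Cohen's kappa between two annotators labelling sentences with NFR categories.
from sklearn.metrics import cohen_kappa_score
from collections import Counter

annotator_1 = ["Usability", "Efficiency", "Reliability", "Usability", "Portability", "Maintainability"]
annotator_2 = ["Usability", "Efficiency", "Usability",   "Usability", "Portability", "Efficiency"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")   # Landis & Koch: 0.21-0.40 'fair', 0.41-0.60 'moderate'

# the same statistic by hand: (observed agreement - chance agreement) / (1 - chance)
n = len(annotator_1)
p_o = sum(a == b for a, b in zip(annotator_1, annotator_2)) / n
c1, c2 = Counter(annotator_1), Counter(annotator_2)
p_e = sum(c1[k] * c2[k] for k in c1) / n**2
print(f"by hand: {(p_o - p_e) / (1 - p_e):.2f}")
```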


Automatic Extraction and Identification of Open Source Software License Terms

Zhiqiang Wang, Sheng Wu, Guoqiang Xiao and Zili Zhang, College of Computer Science and Technology, Southwest University, Chongqing, China

ABSTRACT

The tremendous achievements of open source software have changed business models and had a profound impact on the open source industry and even society, giving rise to various open source licenses that regulate the use of open source software in legal form. However, the wide variety of licenses makes it difficult for developers to properly understand the differences between them. To alleviate this problem, this research benefits from the prosperous development of machine learning and presents a natural language processing framework to obtain the topics of a license and automatically identify its terms. Based on hand-selected license dimensions, we introduce a novel topic model that matches license themes to their corresponding dimensions. In experiments, we validate our model on public open source licenses that we collected and labelled. We show that our approach is an effective solution for understanding licenses.

KEYWORDS

Open Source Software License, License Terms, Topic Modelling, Latent Dirichlet Allocation.
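A toy sketch of the topic-modelling step using standard LDA from scikit-learn; the license-like snippets below are paraphrased for illustration and are not the corpus collected by the authors.

```python
# Fit a small LDA topic model over license text and inspect the top terms per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

licenses = [
    "permission is granted to use copy modify and distribute this software",
    "redistribution must retain the copyright notice and this list of conditions",
    "the software is provided as is without warranty of any kind express or implied",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(licenses)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {t}: {', '.join(top)}")   # e.g. grouping permission/distribute vs warranty terms
```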


Stack and Deal: An Efficient Algorithm for Privacy Preserving Data Publishing

Vikas Thammanna Gowda, Department of Electrical Engineering and Computer Science, Wichita State University, Kansas, USA

ABSTRACT

Although k-Anonymity is a good way to publish microdata for research purposes, it still suffers from various attacks. Hence, many refinements of k-Anonymity have been proposed, such as l-Diversity and t-Closeness, with t-Closeness being one of the strictest privacy models. Satisfying t-Closeness for a lower value of t may yield equivalence classes with a high number of records, which results in greater information loss. For a higher value of t, equivalence classes are still prone to homogeneity, skewness, and similarity attacks, because equivalence classes can be formed with fewer distinct sensitive attribute values and still satisfy the constraint t. In this paper, we introduce a new algorithm that overcomes the limitations of k-Anonymity and l-Diversity and yields equivalence classes of size k with greater diversity, in which the frequency of a sensitive attribute (SA) value differs by at most one across all equivalence classes.

KEYWORDS

k-Anonymity, l-Diversity, t-Closeness, Privacy Preserving Data Publishing.
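Reading the abstract's stated property literally, one way to obtain equivalence classes of size k in which each sensitive-attribute value's frequency differs by at most one is to stack records by sensitive value and deal them round-robin; the sketch below is that reading, not necessarily the authors' exact algorithm, and the records are made up.

```python
# "Stack and deal" reading: group records by sensitive attribute (SA) value,
# then deal them round-robin into equivalence classes of size k.
from collections import defaultdict

def stack_and_deal(records, k, sa="disease"):
    n_classes = len(records) // k
    # stack: group records by SA value, most frequent value first
    stacks = defaultdict(list)
    for r in records:
        stacks[r[sa]].append(r)
    ordered = [r for v in sorted(stacks, key=lambda v: -len(stacks[v])) for r in stacks[v]]
    # deal: hand records out round-robin across the equivalence classes
    classes = [[] for _ in range(n_classes)]
    for i, r in enumerate(ordered):
        classes[i % n_classes].append(r)
    return classes

records = [{"zip": 67200 + i, "age": 20 + i, "disease": d}
           for i, d in enumerate(["flu"] * 4 + ["cancer"] * 3 + ["asthma"] * 2)]
for c in stack_and_deal(records, k=3):
    print([r["disease"] for r in c])   # each SA value's count per class differs by at most one
```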


Classification of High-Resolution Satellite Images from Urban Areas Based on Hybrid SVM and MIL

Magdy Shayboub Ali Mahmoud, Computer Science Dept., Faculty of Computers and Informatics, Ismailia, 41522, Suez Canal University, Egypt

ABSTRACT

Remote-sensed image classification has advanced considerably, given the availability and abundance of classification algorithms for images of various resolutions. A number of works have succeeded by fusing spatial-spectral knowledge with support vector machines (SVM). In order to incorporate all of this information in a composite approach, we suggest a technique using a hybrid multi-spectral and multi-instance procedure. This paper introduces a novel approach to classifying urban buildings through SVM-based classification combined with multi-instance learning (MIL). We apply this model to the classification of high-resolution Quickbird imagery. This combination contributes to the performance, efficiency, and power of the classifier. The suggested solution was tested on typical urban imagery scenes. The results show a major improvement in classification performance compared to using the two attribute types separately, and the experiments indicate a very promising accuracy of 91.24%.

KEYWORDS

Multi-Instance Learning (MIL), Support Vector Machines (SVM), Quickbird Satellite images, textural and spatial metrics.
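A hedged sketch of a simple MIL-plus-SVM pipeline on synthetic data: instance features are pooled into bag-level features and an SVM classifies the bag. This is a common MIL baseline, not necessarily the paper's hybrid model.

```python
# Each image region (bag) holds several patch feature vectors (instances);
# mean/max pooling builds a bag-level feature and an SVM classifies the bag.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_bag(is_building, n_instances=10, dim=6):
    X = rng.normal(0.0, 1.0, size=(n_instances, dim))
    if is_building:                  # positive bags contain a few shifted instances
        X[: n_instances // 3] += 2.0
    return X

def bag_features(bag):
    # mean + max pooling over the instances of one bag
    return np.concatenate([bag.mean(axis=0), bag.max(axis=0)])

labels = rng.integers(0, 2, size=200)
X = np.stack([bag_features(make_bag(y)) for y in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("bag-level accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```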


Credit Card Fraud Detection using Supervised and Unsupervised Learning

Vikas Thammanna Gowda, Department of Electrical Engineering and Computer Science, Wichita State University, Kansas, USA

ABSTRACT

In today’s economic scenario, credit card use has become common. These cards allow the user to make payments online and even in person. Online payments are very convenient, but they come with their own risk of fraud. With the increasing number of users, credit card frauds are also increasing at the same pace. Machine learning algorithms can be applied to tackle this problem. In this paper, an evaluation of supervised and unsupervised machine learning algorithms is presented for credit card fraud detection.

KEYWORDS

Credit card fraud detection, Supervised learning, Unsupervised learning.
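As an illustration of the comparison described, the sketch below runs one supervised and one unsupervised detector from scikit-learn on a synthetic, imbalanced dataset; the paper's actual dataset and model choices are not specified here.

```python
# Supervised vs. unsupervised fraud detection on a synthetic imbalanced dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, weights=[0.98, 0.02],
                           random_state=0)          # ~2% of samples are "fraud"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# supervised: learns from labelled fraud examples
sup = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, sup.predict(X_te), digits=2))

# unsupervised: flags outliers without using labels at training time
unsup = IsolationForest(contamination=0.02, random_state=0).fit(X_tr)
pred = (unsup.predict(X_te) == -1).astype(int)      # -1 = anomaly -> fraud
print(classification_report(y_te, pred, digits=2))
```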


Reach Us

mlt@icaita2021.org


mltconf@yahoo.com
