Haluk Altay and M. Furkan Solmazgül, Teknopark Istanbul, Turkish Aerospace, Istanbul, Turkey
Systems engineering plays a central role in interdisciplinary study. Successfully realizing a multidisciplinary complex system is one of the most challenging tasks of systems engineering. Multidisciplinary work brings problems such as defining complex systems, ensuring communication between stakeholders, and establishing a common language among different design teams. In solving such problems, the traditional systems engineering approach cannot provide an efficient solution. In this paper, a model-based systems engineering approach is applied in a case study and is found to be more efficient. In the case study, the design of a helicopter automatic flight control system was realized by applying model-based design processes with tool integration. Requirement management, system architecture management and model-based systems engineering processes are explained and applied to the case study. Finally, the model-based systems engineering approach is shown to be more effective than traditional systems engineering methods for complex systems in the aviation and defence industries.
Model-Based Systems Engineering, Automatic Flight Control System, SysML
Shengbao Yang, Lianjie Li and Sanfeng Zhang, Yunnan University, Kunming, China
Traffic data is the data foundation for smart transportation construction. However, due to inclement weather and equipment damage, data are often missing during collection, which severely restricts progress in smart transportation construction. Traffic data recovery has therefore become an urgent problem in the field of intelligent transportation. This paper proposes a technique based on Tensor Completion and Coupled Matrix and Tensor Factorizations (CMTF); targeting the problems in traffic data recovery, it focuses on recovery models suited to extreme data-loss situations. To address the sharp decline in recovery accuracy of existing traffic data recovery methods under extreme missing conditions, this paper proposes a traffic data recovery model based on multi-source data association analysis, verified with real taxi GPS positioning data and point of interest (POI) data. The experimental results show that the proposed model significantly improves the recovery accuracy of missing data and maintains good stability in cases of extreme data loss.
Jianwen Zhao and Ada Wai-chee Fu, Department of Computer Science and Engineering, The Chinese University of Hong Kong, Sha Tin, Hong Kong
Subgraph isomorphism search is an important problem in graph data management. Due to its computational hardness, recent studies consider cloud computing while incorporating data anonymization for privacy protection. The state-of-the-art solution provides a framework but targets the enumeration of all query results, which can be prohibitively expensive. In this work, we study the problem of privacy-preserving cloud-based diversified top-k subgraph querying, that is, given a query graph Q and a number k, we aim to retrieve k isomorphic subgraphs from the given data graph G such that together they cover as many distinct vertices as possible. We show that the state-of-the-art solution cannot address top-k queries, and we propose (1) a new graph anonymization technique equipped with a novel densest-block based vertex mapping method and a simple and effective label generalization method; (2) an iterative querying method that involves low communication overhead. Our extensive experiments on real-life datasets verify the efficiency and effectiveness of the proposed methods, which significantly outperform the baselines.
Privacy Preservation, Subgraph Isomorphism, Top-k Querying, Results Diversity.
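The diversification objective described in the abstract above (cover as many distinct vertices as possible with k matches) is commonly approximated with a greedy max-coverage heuristic. The sketch below is a toy illustration of that heuristic only, not the authors' privacy-preserving method; the candidate matches are hypothetical vertex sets:

```python
def diversified_top_k(candidates, k):
    """Greedily select k vertex sets that maximize the number of
    distinct vertices covered (classic max-coverage heuristic)."""
    chosen, covered = [], set()
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        # pick the candidate that adds the most not-yet-covered vertices
        best = max(pool, key=lambda s: len(s - covered))
        if not best - covered:      # nothing new to cover, stop early
            break
        chosen.append(best)
        covered |= best
        pool.remove(best)
    return chosen, covered

# four isomorphic matches of some query graph, as vertex sets
matches = [{1, 2, 3}, {2, 3, 4}, {5, 6, 7}, {1, 2, 7}]
picked, covered = diversified_top_k(matches, k=2)
```

With k=2 the greedy pass prefers the disjoint matches, covering six distinct vertices instead of four.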
Md Kamrul Islam, Sabeur Aridhi and Malika Smail-Tabbone, Universite de Lorraine, CNRS, Inria, LORIA, 54000 Nancy, France
The task of inferring missing links or predicting future ones in a graph based on its current structure is referred to as link prediction. Link prediction methods based on pairwise node similarity are well-established approaches in the literature and show good prediction performance in many real-world graphs, though they are heuristic. On the other hand, graph embedding approaches learn low-dimensional representations of nodes in a graph and are capable of capturing inherent graph features, thus supporting the subsequent link prediction task. This appraisal paper studies a selection of methods from both categories on several benchmark (homogeneous) graphs with different properties from various domains. Beyond the intra- and inter-category comparison of the methods' performance, our aim is also to uncover interesting connections between GNN-based methods and heuristic ones as a means to alleviate the well-known black-box limitation.
Link Prediction, Graph Neural Network, Homogeneous Graph & Node Embedding.
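Two of the pairwise-similarity heuristics this category of work builds on are Common Neighbors and Adamic-Adar. As a minimal illustration (toy graph, not one of the paper's benchmark datasets), both scores can be computed directly from adjacency sets:

```python
import math

graph = {                     # toy undirected graph, adjacency sets
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def common_neighbors(g, u, v):
    """Number of neighbors shared by u and v."""
    return len(g[u] & g[v])

def adamic_adar(g, u, v):
    """Shared neighbors weighted inversely by their log-degree:
    rare shared neighbors count for more."""
    return sum(1.0 / math.log(len(g[w])) for w in g[u] & g[v])

cn = common_neighbors(graph, "b", "d")   # b and d share a and c
aa = adamic_adar(graph, "b", "d")
```

A higher score suggests the non-edge (b, d) is a likely missing link.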
Ms. Muskan Rathore, Mr. Shubham Kumar and Mr. Shashwat Vaibhav, Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur, India
This paper discusses how data mining can be extremely useful for a country's national security. It examines patterns of terrorist attacks in India and shows that, statistically, the only phase in which security agencies can detect a terrorist attack and take action is the planning phase. Targeting the planning phase, we have designed an algorithm that detects planning-phase meetings in hotels and calls forming terrorist networks, by mining data from hotel booking records and CDRs respectively, and alerts the designated authorities to take action.
Terrorist Attacks, National Security, Terrorist Organizations, CDR, Calls, Hotel meeting, Terrorist Network, Hijack, India, Terrorist detection.
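The CDR-mining step described above amounts to building a call graph and extracting the network reachable from a flagged number. A minimal sketch of that idea follows; the CDR rows and numbers are entirely hypothetical, and this is not the authors' algorithm:

```python
from collections import defaultdict, deque

# hypothetical CDR rows: (caller, callee)
cdrs = [("A", "B"), ("B", "C"), ("C", "A"), ("D", "E")]

# build an undirected call graph from the records
graph = defaultdict(set)
for caller, callee in cdrs:
    graph[caller].add(callee)
    graph[callee].add(caller)

def network_of(number):
    """All numbers reachable from a flagged number (BFS)."""
    seen, queue = {number}, deque([number])
    while queue:
        cur = queue.popleft()
        for nxt in graph[cur] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

suspects = network_of("A")   # the component containing the flagged number
```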
Hoda Nematy, Malek-Ashtar University of Technology, Shabanlou, Babaee Hwy, Lavizan, Tehran, Iran
D2D is a new form of communication for reducing cellular traffic and increasing the efficiency of the cellular network. This form of communication was introduced for the 4th generation of cellular communication and certainly has a big role in the 5th generation. Four D2D communication scenarios are defined in the literature: direct D2D and relaying D2D communication, both with and without cellular infrastructure. One of the major challenges for D2D protocols is to have a single secure protocol that can adapt to all four scenarios. In this paper, we propose a secure D2D protocol based on ARIADNE with TESLA and the LTE-A AKA protocol. We use the LTE-A AKA protocol for authentication and key agreement between source and destination, and TESLA for broadcast authentication between relaying nodes. Based on the results, our proposed protocol has low computation overhead compared to recent works and less communication overhead than SODE, while preserving many security properties such as authentication, authorization, confidentiality, integrity, secure key agreement and secure routing transmission. We check the authentication, confidentiality, reachability and secure key agreement of the proposed protocol with the ProVerif verification tool.
5th generation, Four D2D scenarios, LTE-A AKA protocol, secure D2D protocol, ProVerif.
Grace Olaleru, Henry Ohize, Abubakar Saddiq Mohammed, Department of Electrical and Electronics Engineering, Federal University of Technology, Minna, Nigeria
The primary user emulation attack (PUEA) is one of the most common attacks faced by Cognitive Radio Networks (CRNs). In this attack, a malicious user transmits a signal similar to the real primary user's (PU) signal to cause the legitimate secondary users (SUs) to leave the available channel while the PU is absent; hence, detecting this attacker is vital in building a real CRN. In this paper, the PUEA is detected with the Time Difference of Arrival (TDOA) localization technique using a Modified Particle Swarm Optimization (MPSO) algorithm. This technique is capable of efficiently detecting the PUEA located anywhere within the CRN. The performance of the developed technique was evaluated using the Mean Square Error and the cumulative distribution function, and the results were compared with the standard PSO (SPSO) via simulation in MATLAB. Simulation results showed that the MPSO performed better than the SPSO.
Cognitive Radio Network, Primary User Emulation Attack, Time Difference of Arrival, Modified Particle Swarm Optimization, Standard Particle Swarm Optimization.
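To make the TDOA-plus-PSO idea above concrete, the sketch below runs a standard PSO (not the paper's modified variant) minimizing a TDOA least-squares residual to localize an emitter. Sensor positions, PSO coefficients and the true source are illustrative assumptions:

```python
import math, random

random.seed(42)
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_src = (3.0, 4.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# measured range differences relative to sensor 0 (noise-free toy case)
tdoa = [dist(true_src, s) - dist(true_src, sensors[0]) for s in sensors]

def residual(p):
    """Sum of squared TDOA residuals; zero at the true source."""
    return sum((dist(p, s) - dist(p, sensors[0]) - d) ** 2
               for s, d in zip(sensors, tdoa))

# standard PSO: inertia w, cognitive c1, social c2
w, c1, c2 = 0.7, 1.5, 1.5
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(30)]
vel = [[0.0, 0.0] for _ in pos]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=residual)

for _ in range(200):
    for i, p in enumerate(pos):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - p[d])
                         + c2 * random.random() * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if residual(p) < residual(pbest[i]):
            pbest[i] = p[:]
            if residual(p) < residual(gbest):
                gbest = p[:]

error = dist(gbest, true_src)   # distance from estimate to true emitter
```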
Samah Mohammed S. Alhusayni and Wael Ali Alosaimi, Taif University, Saudi Arabia
The Internet of Things (IoT) has received huge attention recently due to its emergence, benefits, and contribution to improving the quality of human lives. Securing IoT remains an open area of research, as security is the basis for allowing people to use the technology and embrace this development in their daily activities. Authentication is one of the influencing security elements of Information Assurance (IA), which includes confidentiality, integrity, availability, non-repudiation, and authentication. Therefore, there is a need to enhance security in current authentication mechanisms. In this report, some of the authentication mechanisms proposed in recent years are presented and reviewed. Specifically, the study focuses on enhancing security in the CoAP protocol, due to its relevance to the characteristics of IoT devices, by using a symmetric key with biometric features in authentication. This study will help in providing secure authentication technology for IoT data, devices, and users.
Authentication, authorization, key agreement, anonymity, traceability, Security, Cybersecurity, Secure by Design, Next Generation Internet, Smart City, wireless sensor networks, 5G network, the Internet of Things, CoAP, symmetric key and biometric.
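The symmetric-key authentication ingredient mentioned above can be sketched with Python's standard `hmac` module: the device tags a payload with a MAC over a nonce and the payload, and the verifier checks it in constant time. This is only a generic sketch, not the paper's CoAP scheme; the key and payload are hypothetical:

```python
import hmac, hashlib, secrets

PSK = b"pre-shared-device-key"        # provisioned out of band (assumed)

def tag_message(key, nonce, payload):
    """MAC over nonce||payload so a replay with a stale nonce fails."""
    return hmac.new(key, nonce + payload, hashlib.sha256).digest()

def verify(key, nonce, payload, tag):
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(tag_message(key, nonce, payload), tag)

nonce = secrets.token_bytes(16)
payload = b'{"temp": 21.5}'
tag = tag_message(PSK, nonce, payload)

ok = verify(PSK, nonce, payload, tag)
tampered = verify(PSK, nonce, b'{"temp": 99.9}', tag)
```

A tampered payload fails verification because the recomputed MAC no longer matches.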
Mariana Falco1 and Gabriela Robiolo2, 1LIDTUA/CONICET, Engineering School, Universidad Austral, Pilar, Buenos Aires, Argentina, 2LIDTUA, Engineering School, Universidad Austral, Pilar, Buenos Aires, Argentina
Project managers and leaders need to view and understand the entire picture of the development process, and also to comprehend the product quality level in a synthetic and intuitive way, to facilitate the decision of accepting or rejecting each iteration within the software life cycle. This article presents a novel solution called the Product Quality Evaluation Method (PQEM) to evaluate the quality characteristics of each iteration of a software product, using the Goal-Question-Metric approach, ISO/IEC 25010, and an extension of test coverage applied to each quality characteristic. The outcome of PQEM is a single value representing the quality of each iteration of a product, as an aggregate measure. An illustrative example of the method was carried out with a web and mobile application in the healthcare environment.
Quality Characteristics, Product Quality Measurement, Coverage, Quality Attributes.
Sheveleva Tatiana, Fellowship, Technical University, Omsk, Russia
This article surveys programs and software packages that allow one to design the kinematics of mechanisms, and presents several examples showing the use of these programs in creating geometric models.
Mechanism kinematics, CAD system, computer aided design system, Matlab, SolidWorks.
Maliha Sabir, Dr Ebad Banissi and Dr Mike Child, Department of Big Data and Informatics, London South Bank University, London, United Kingdom
State-of-the-art solutions for the classification of non-functional requirements (NFRs) are mostly based on supervised machine learning models trained on manually annotated examples. Yet these techniques suffer from various limitations, such as 1) the lack of a theoretical representation for NFRs and 2) the unavailability of a representative domain corpus. Our contribution is a representative domain corpus for NFRs, called the CUSTOM NFRs corpus, based on a sample drawn from software quality models. It consists of five NFR categories (efficiency, usability, reliability, portability and maintainability), making a total of 1484 sentences. Further, we propose an iterative design to obtain a gold-standard multi-label corpus for NFRs based on a web-based crowdsourcing platform (Figure Eight). The procedure involved three annotators, and results are calculated using Cohen's Kappa. The analysis of the initial results shows a fair agreement; however, this study is limited to one iteration. The ultimate aim is to enable future researchers to 1) train machine-learning-based NLP systems or discover rules for rule-based systems, and 2) evaluate the performance of NLP systems.
Software Requirements Engineering; Requirements Ontology; Non-functional Requirements, Requirement Corpus; Gold Standard Corpus, Crowdsourcing Annotations.
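The agreement statistic used above, Cohen's Kappa, corrects observed agreement for agreement expected by chance. A minimal implementation on hypothetical annotator labels (not the paper's data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Inter-annotator agreement corrected for chance:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    # chance agreement from each annotator's marginal label frequencies
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in ca.keys() | cb.keys())
    return (p_o - p_e) / (1 - p_e)

# two annotators labelling four requirement sentences (hypothetical)
ann1 = ["Usability", "Usability", "Reliability", "Reliability"]
ann2 = ["Usability", "Reliability", "Reliability", "Reliability"]
kappa = cohens_kappa(ann1, ann2)
```

Here observed agreement is 0.75 and chance agreement 0.5, giving kappa = 0.5, a "moderate" agreement on the usual scale.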
Zhiqiang Wang, Sheng Wu, Guoqiang Xiao and Zili Zhang, College of Computer Science and Technology, Southwest University, Chongqing, China
The tremendous achievements of open source software have changed the business model and had a profound impact on the open source industry and even society, giving rise to various open source licenses that regulate the use of open source software in legal form. However, the wide variety of licenses makes it difficult for developers to properly understand the differences between them. To alleviate this problem, this research benefits from the prosperous development of machine learning and presents a natural language processing framework to obtain the topics of, and automatically identify terms in, a license. Based on hand-selected license dimensions, we introduce a novel topic model that matches each license theme to its corresponding dimension. In experiments, we validate our model on public open source licenses that we collected and labelled. We show our approach is an effective solution for understanding licenses.
Open Source Software License, License Terms, Topic Modelling, Latent Dirichlet Allocation.
Srinidhi Kulkarni and Rishabh Kumar Tripathi, Department of Computer Science and Engineering, International Institute of Information Technology, Bhubaneswar, India
In this paper we present a literature study of the security algorithms that have been proposed to secure cloud computing platforms. The paper presents the potential threats and security issues of cloud computing platforms and the research carried out in these fields. Cryptography-based security algorithms such as RSA, DES, AES, ECC and Blowfish are discussed, and works relating to these algorithms are studied and their results presented. Some novel approaches in which machine learning frameworks are used to enforce the security of the cloud are also discussed in detail. A comparative study of the security algorithms, based on their performance on various impact factors of a system, is also presented, drawing on past research. The discussion in this paper is general and applicable to any service and any type of deployment of a cloud computing system. The paper aims to contribute to the domain knowledge of security and the different ways to enhance it.
Cloud computing, Security threats and breaches, Cryptography, Security Algorithms, Machine Learning, Quantum Cryptography.
Mohammed Hamoumi, Abdellah Haddout and Mariam Benhadou, Laboratory of Industrial Management, Energy and Technology of Plastic Materials and Composites, Hassan II University - ENSEM, Casablanca, Morocco
Based on the principle that perfection is a divine criterion, process management exists on the one hand to achieve excellence (near perfection) and on the other to avoid imperfection. In other words, Operational Excellence (OE) is an approach that, when used rigorously, aims to maximize performance. Mastery of problem solving therefore remains necessary to achieve such a performance level. There are many tools available, whether in continuous improvement for the resolution of chronic problems (KAIZEN, DMAIC, Lean Six Sigma…) or in the resolution of sporadic defects (8D, PDCA, QRQC…). However, these methodologies often use the same basic tools (Ishikawa diagram, 5 Whys, tree of causes…) to identify potential causes and root causes, resulting in three levels of causes: occurrence, non-detection and system. This research presents the development of the DINNA diagram as an effective and efficient process that links the Ishikawa diagram and the 5 Whys method to identify root causes and avoid recurrence. The ultimate objective is that two working groups with similar skills analysing the same problem separately should reach the same result; to achieve this, the consistent application of a robust methodology is required. We therefore speak of five dimensions: occurrence, non-detection, system, effectiveness and efficiency. As such, the paper offers a solution that is both effective and efficient, helping practitioners of industrial problem solving avoid missing the real root cause and save the costs of a wrong decision.
Operational Excellence, DINNA Diagram, Double Ishikawa and Naze Naze Analysis, Ishikawa, 5 Whys analysis, Morocco.
Vikas Thammanna Gowda, Department of Electrical Engineering and Computer Science, Wichita State University, Kansas, USA
Although k-Anonymity is a good way to publish microdata for research purposes, it still suffers from various attacks. Hence, many refinements of k-Anonymity have been proposed, such as l-Diversity and t-Closeness, with t-Closeness being one of the strictest privacy models. Satisfying t-Closeness for a lower value of t may yield equivalence classes with a high number of records, which results in greater information loss. For a higher value of t, equivalence classes are still prone to homogeneity, skewness, and similarity attacks, because equivalence classes can be formed with fewer distinct sensitive attribute (SA) values and still satisfy the constraint t. In this paper, we introduce a new algorithm that overcomes the limitations of k-Anonymity and l-Diversity and yields equivalence classes of size k with greater diversity, in which the frequency of an SA value across all equivalence classes differs by at most one.
k-Anonymity, l-Diversity, t-Closeness, Privacy Preserving Data Publishing.
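The two properties the abstract above targets (equivalence classes of size k, with each sensitive value's frequency differing by at most one across classes) are easy to state as a checker. This is a verification sketch on toy data, not the authors' partitioning algorithm:

```python
from collections import Counter

def check_partition(classes, k):
    """Verify: every equivalence class has size k, and for each
    sensitive-attribute value its per-class frequency differs
    by at most one across all classes."""
    if any(len(c) != k for c in classes):
        return False
    values = {v for c in classes for v in c}
    for v in values:
        counts = [Counter(c)[v] for c in classes]
        if max(counts) - min(counts) > 1:
            return False
    return True

# each inner list holds one equivalence class's sensitive values
good = [["flu", "hiv", "cold"], ["flu", "hiv", "cancer"]]
bad  = [["flu", "flu", "flu"], ["hiv", "cold", "cancer"]]

ok_good = check_partition(good, k=3)
ok_bad  = check_partition(bad, k=3)
```

The `bad` partition fails because "flu" appears three times in one class and never in the other, exactly the homogeneity risk the paper addresses.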
Yassine MOUMEN, Mariam BENHADOU and Abdellah HADDOUT, Department of Industrial Engineering, ENSEM Hassan II University, Casablanca, Morocco
In the industry 4.0 era, companies have developed rapidly and have digitalized almost the whole business system in order to meet their performance targets and to keep, or even enlarge, their market share. The maintenance function has naturally followed this trend, as it is considered one of the most important processes in every enterprise, impacting a group of the most critical performance indicators: cost, reliability, availability, safety and productivity. E-maintenance emerged in the early 2000s and is now a common term in the maintenance literature, representing the digitalized side of maintenance, whereby assets are monitored and controlled over the internet. According to the literature, e-maintenance has a remarkable impact on maintenance KPIs and aims at ambitious objectives such as zero downtime.
E-maintenance, Maintenance, industry 4.0, industrial performance, zero-downtime.
Parhat Abla, State Key Laboratory of Information Security, Institute of Information Engineering, CAS, Beijing 100093, China & School of Cyber Security, UCAS, Beijing 100093, China
Group key exchange schemes allow group members to agree on a session key. Although there are many works on constructing group key exchange schemes, most of them are based on algebraic problems which can be solved by quantum algorithms in polynomial time. Several works have considered lattice-based group key exchange schemes, believed to be post-quantum secure, but only in the random oracle model. In this work, we propose a group key exchange scheme based on the ring learning with errors problem. In contrast to existing schemes, our scheme is proved secure in the standard model. To achieve this, we define and instantiate a multi-party key reconciliation mechanism. Furthermore, using a known compiler with lattice-based signature schemes, we can achieve authenticated group key exchange with post-quantum security.
Group key exchange; Lattice; Ring LWE.
Magdy Shayboub Ali Mahmoud, Computer Science Dept., Faculty of Computers and Informatics, Ismailia, 41522, Suez Canal University, Egypt
Remote sensed image classification has advanced considerably, given the availability and abundance of classification algorithms for images of various resolutions. A number of works have succeeded by fusing spatial-spectral knowledge with support vector machines (SVM). In order to incorporate all this data in a composite approach, we suggest a hybrid multi-spectral and multi-instance procedure. This paper introduces an approach to classifying urban buildings through SVM-based classification with multi-instance learning (MIL), applied to high-resolution Quickbird imagery. The suggested solution was tested on traditional urban imagery scenes. The results show a major improvement in classification performance compared to the two attributes used separately, with a very promising accuracy of 91.24%.
Multi-Instance Learning (MIL), Support Vector Machines (SVM), Quickbird Satellite images, textural and spatial metrics.
Vikas Thammanna Gowda, Department of Electrical Engineering and Computer Science, Wichita State University, Kansas, USA
In today’s economic scenario, credit card use has become common. These cards allow the user to make payments online and even in person. Online payments are very convenient, but they come with their own risk of fraud. With the increasing number of users, credit card frauds are increasing at the same pace. Machine learning algorithms can be applied to tackle this problem. In this paper, an evaluation of supervised and unsupervised machine learning algorithms for credit card fraud detection is presented.
Credit card fraud detection, Supervised learning, Unsupervised learning.
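On the unsupervised side of the comparison above, one of the simplest detectors flags transactions whose amount deviates strongly from the typical spending pattern (a z-score rule). The data and threshold below are illustrative assumptions, not the paper's evaluation setup:

```python
import statistics

amounts = [12.5, 9.9, 11.2, 10.4, 13.1, 9.5, 500.0]  # toy transactions

mu = statistics.mean(amounts)
sigma = statistics.pstdev(amounts)

def is_suspicious(x, threshold=2.0):
    """Flag an amount whose z-score exceeds the threshold."""
    return abs(x - mu) / sigma > threshold

flags = [is_suspicious(a) for a in amounts]
```

Only the 500.0 outlier is flagged; real systems use many features (merchant, time, location) rather than amount alone.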
Abhishek Khare, Rahul Khanvilkar, Kunal Sondkar, Arathi Kamble, Department of Computer Engineering, NHITM, Thane, India
Data in healthcare is a collection of records of patients, hospitals, doctors and medical treatments, and it is growing so fast that it is difficult to maintain and analyze using traditional data analytics methods. To overcome this problem, machine learning techniques are applied to such large amounts of data. To obtain better accuracy, this paper proposes a machine learning approach known as ensemble learning, which combines the results of three machine learning algorithms using a soft voting method; the combined results are then evaluated on accuracy.
Ensemble Learning, Soft Voting Method, Machine Learning.
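Soft voting, as named above, averages the class-probability vectors of the member models and picks the class with the highest mean probability. A minimal sketch with three hypothetical classifier outputs (the models and values are illustrative, not the paper's):

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several models,
    then pick the class with the highest mean probability."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# hypothetical outputs of three classifiers for one patient record
probs = [
    [0.9, 0.1],   # model 1: class 0 likely
    [0.4, 0.6],   # model 2: class 1 likely
    [0.7, 0.3],   # model 3: class 0 likely
]
label, avg = soft_vote(probs)
```

Although model 2 disagrees, the averaged probabilities (about 0.67 vs 0.33) give class 0, illustrating how soft voting weighs confidence rather than counting hard votes.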
Smriti Keny, Sagarika Nair, Silka Nandi and Deepak Khachane
Analysing and predicting the sales data of a pharmaceutical industry is essential to obtain good estimates of medicine needs, thereby preventing the costs of excessive inventory and the loss of customers due to drug shortages. In pharmaceutical distribution companies, it is highly important to obtain an approximation of medicine needs, due to the short shelf life of many medicines and the need to control stock levels. Proper planning helps avoid excessive inventory costs while guaranteeing customer demand satisfaction. This makes it a need of the hour to combine in-market product demand forecasting with purchase order generation for the pharmaceutical supply chain. Taking past sales records into consideration and planning the inventory accordingly will help reduce the risk of wasted resources. In this project, we explore the use of the support vector regression (SVR) algorithm for the sales prediction of individual products. The project also helps visualize and analyze the sales data for a better understanding of trends and seasonality. The results obtained with the algorithm, as well as with the proposed method for visualization and analysis, suggest that the proposed model may be considered appropriate for product sales prediction.
PDCs, SVR, Inventory management, Sales prediction, Data Analysis, Data Visualization.
Quoc-Huy Trinh and Minh-Van Nguyen, Department of Computer Engineering, Ho Chi Minh University of Science, Ho Chi Minh City, Vietnam
We propose a method that fine-tunes a combination of DenseNet and ResNet backbones to classify 8 classes covering anatomical landmarks, pathological findings and endoscopic procedures in the GI tract. Our technique relies on transfer learning and combines two backbones, DenseNet-121 and ResNet-101, to improve the performance of feature extraction for classifying the target classes. After experimenting and evaluating our work, we obtain an F1 score of approximately 0.93, training on 80000 images and testing on 4000 images.
Kvasir dataset, dense-res, medical image, classification, deep neural network.
Michel Andre L. Vinagreiro1, Edson C. Kitani2, Armando Antonio M. Lagana3 and Leopoldo R. Yoshioka4, 1Laboratory of Integrated Systems, Escola Politecnica da Universidade de Sao Paulo, Sao Paulo, Sao Paulo, Brasil, 2Department of Automotive Electronics, Fatec Santo Andre, Santo Andre, Sao Paulo, Brasil, 3Laboratory of Integrated Systems, Escola Politecnica da Universidade de Sao Paulo, Sao Paulo, Sao Paulo, Brasil, 4Laboratory of Integrated Systems, Escola Politecnica da Universidade de Sao Paulo, Sao Paulo, Sao Paulo, Brasil
Computer vision plays a crucial role in ADAS security and navigation. As most systems are based on deep CNN architectures, the computational resources required to run a CNN algorithm are demanding; therefore, methods to speed up computation have become a relevant research issue. Even though several works on acceleration techniques are found in the literature, satisfactory results for embedded real-time system applications have not yet been achieved. This paper presents an alternative approach based on the Multilinear Feature Space (MFS) method, resorting to transfer learning from large CNN architectures. The proposed method uses CNNs to generate feature maps, although it does not work as a complexity reduction approach. When the training process ends, the generated maps are used to create a vector feature space. We use this new vector space to project any new sample in order to classify it. Our method, named MFS-CNN, uses transfer learning from a pre-trained CNN to reduce the classification time of a new sample image, with minimal loss in accuracy. It uses the VGG-16 model as the base CNN architecture for experiments; however, the method works with any similar CNN model. Using the well-known Vehicle Image Database and the German Traffic Sign Recognition Benchmark, we compared the classification time of the original VGG-16 model with that of the MFS-CNN method, and our method is, on average, 17 times faster. The fast classification time reduces the computational and memory demands of embedded applications that require a large CNN architecture.
Convolutional Neural Networks, Deep Learning Acceleration, Advanced Driver Assistance Systems.
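Classifying a new sample directly in a feature space built from pre-trained CNN features, as the abstract above describes, can be loosely illustrated by a nearest-centroid rule over feature vectors. This is a toy stand-in for the idea of classifying via projections in a feature space, not the MFS-CNN method itself; classes and features are hypothetical:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(xs) / len(vectors) for xs in zip(*vectors)]

def nearest_class(feature, centroids):
    """Assign a feature vector to the class with the closest centroid."""
    return min(centroids,
               key=lambda cls: math.dist(feature, centroids[cls]))

# hypothetical pooled CNN feature vectors for two classes
train = {
    "car":  [[0.9, 0.1], [0.8, 0.2]],
    "sign": [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {cls: centroid(vs) for cls, vs in train.items()}
pred = nearest_class([0.85, 0.15], centroids)
```

Once the centroids are precomputed, classifying a sample is a handful of distance computations, which is the kind of saving a feature-space classifier offers over a full forward pass through the top of a CNN.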
Diego P. Pinto-Roa1,4*, Hernán Medina4, Federico Román4, Miguel García-Torres1,2, Federico Divina1, Francisco Gómez-Vela1, Félix Morales1, Gustavo Velázquez1, Federico Daumas1, José L. Vázquez-Noguera1, Carlos Sauer Ayala3 and Pedro E. Gardel-Sotomayor5, 1Computer Engineering Department, Universidad Americana, Paraguay, 2Division of Computer Science, Universidad Pablo de Olavide, Seville, Spain, 3Facultad de Ingeniería, Universidad Nacional de Asunción, Paraguay, 4Facultad Politécnica, Universidad Nacional de Asunción, Paraguay, 5Universidad Católica de Asunción, Ciudad del Este, Paraguay
The discovery and description of patterns in electric energy consumption time series is fundamental for timely management of the system. A bicluster describes a subset of observation points over a time period in which a consumption pattern, such as abrupt changes or instabilities, occurs homogeneously. Nevertheless, the complexity of pattern detection increases with the number of observation points and the number of samples in the study period. In this context, current biclustering techniques may not detect significant patterns given the increased search space. This study develops a parallel evolutionary computation scheme to find biclusters in electric energy consumption data. Numerical simulations show the benefits of the proposed approach, discovering significantly more electricity consumption patterns than a state-of-the-art non-parallel competing algorithm.
Biclustering, Big data, Electric energy consumption, Parallel evolutionary computation.
Haroun Bouchemoukha, Mohamed Nadjib Zennir and Atidel Lahoulou, Mohammed Seddik Benyahia University, Jijel, Algeria
Traffic forecasting is one of the most difficult tasks in the area of intelligent transportation systems (ITS) because of the complex spatial correlations on road networks and the non-linear temporal dynamics of changing road conditions. To address these issues, researchers have proposed models that combine Graph Convolution Networks (GCN) and Recurrent Neural Networks (RNN), in order to inherit the advantages of both and become capable of capturing spatial-temporal dependencies. Judging models only by their precision, without concern for their structure, has made them increasingly complex, although simple models sometimes produce better results. In this paper, we propose a simple model, called Long Short-Term Memory network for Traffic Forecasting (LSTM-TF), which uses the LSTM to extract spatial-temporal dependencies. Experiments demonstrate that the LSTM-TF model outperforms state-of-the-art baselines on real-world traffic datasets, supporting our hypothesis that simple models such as LSTM-TF sometimes produce better results than more complex ones.
Traffic forecasting, recurrent neural network (RNN), long short-term memory network (LSTM), spatial-temporal dependency, convolutional neural network (CNN) & graph convolution network (GCN).
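The LSTM building block named above maintains a cell state updated by input, forget and output gates. A minimal single-unit forward pass (toy scalar weights, not the LSTM-TF model or its trained parameters) makes the gating explicit:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One step of a single-unit LSTM: the gates decide what to
    forget, what to write into the cell, and what to expose."""
    i = sigmoid(W["wi"] * x + W["ui"] * h + W["bi"])    # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h + W["bf"])    # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h + W["bg"])  # candidate
    c_new = f * c + i * g          # blend old memory with new input
    h_new = o * math.tanh(c_new)   # exposed hidden state
    return h_new, c_new

# illustrative weights (all 0.5) rather than trained values
W = {k: 0.5 for k in
     ("wi", "ui", "bi", "wf", "uf", "bf", "wo", "uo", "bo", "wg", "ug", "bg")}

h, c = 0.0, 0.0
for speed in [60.0, 55.0, 20.0, 15.0]:   # toy traffic-speed series
    h, c = lstm_step(speed / 100.0, h, c, W)
```

Because the exposed state is `o * tanh(c)`, the hidden output stays bounded in (-1, 1) no matter how the cell state grows; in a forecaster, a final linear layer would map `h` back to a predicted speed.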
Coudray Théotime, ART-Dev laboratory, University of Montpellier, France and Climate Economics Chair, Paris, France
Variable and Renewable Energies (VREs) are growing fast among power systems all around the world. An important issue emerging along with this fast development is the extent to which a power system can cope with rapid changes in either load or generation. In this paper, we investigate the potential use of a hybrid forecasting framework to predict a given power system's Residual Load (RL) curve at different timescales. Our forecasting framework relies on the decomposition of the RL time series into oscillatory patterns and the training of a Recurrent Neural Network specifically designed for this task. Using this methodology, we forecast the hourly RL of the French region Occitanie's power system for every day of the full year 2019, with an average MAPE of 3.02%. This empirical application illustrates the consistent accuracy over time of the forecasts given by our methodology, making it very suitable for daily practical use among energy stakeholders.
Power system, Time-series forecasting, Empirical Mode Decomposition, Artificial Neural Network, Smart-grid.
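The accuracy figure quoted above is a MAPE score, which is simple to compute; the sketch below shows the metric on toy residual-load values (the data is illustrative, not the Occitanie series):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent.
    Assumes no actual value is zero."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

# toy hourly residual-load values (MW) and their forecasts
actual   = [100.0, 200.0, 400.0]
forecast = [110.0, 190.0, 400.0]
score = mape(actual, forecast)
```

Errors of 10%, 5% and 0% average to a MAPE of 5.0%; near-zero residual load hours would need special handling since the metric divides by the actual value.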
Shravan Aruljothi, Sharad Parate, Nehal Dinesh Andani and Harshit Shivakumar, Department of Mechanical Engineering, New Horizon College of Engineering, Bangalore, India
The major problem in every urban city is the lack of security in residential areas. The number of thefts and the amount of electricity and food wasted in urban homes increase every year due to human error. As per the National Crime Records Bureau (NCRB), 2,44,119 cases of robbery, theft, and burglary took place in residential premises in 2019. Also, electricity consumption in Indian homes has tripled since 2010: in 2019, an urban Indian household consumed about 90 units (kWh) of electricity per month on average, one third of the monthly world average. To address these issues, we propose a “Home Security Robot” for a smart city using AI. The Home Security Robot will help eliminate the reliance on security guards and will effectively monitor everything in the house (gas leakage, fridge malfunctions, unnecessary electricity wastage, indoor air quality and any unknown movements inside the house). If the owner is under attack, he/she can shout “HELP” or “SAVE ME” so that the robot can take the voice command and automatically call the police. Navigation is handled by an Arduino and the Bluetooth RC Controller app. The system has two parts: face detection and recognition using a Raspberry Pi, and an IoT system using a BOLT module with sensors. The first part consists of three Python programs for facial detection and recognition using OpenCV with the Haar Cascade classifier and the LBPH algorithm. The first program (Face Dataset) collects images of known users and stores them in a database using the Haar Cascade classifier. The second program (Face Training) trains on the stored images using the LBPH algorithm, so the model can distinguish between the users whose faces are stored in the database; the trained data is stored in the trainer.yml file.
The third program (Face Recognition) reads the trained data from the trainer.yml file and uses the Haar Cascade classifier to detect faces and identify whether each face belongs to a user or an intruder. The IoT system, with the help of the BOLT module, checks the temperature in the room and whether any unnecessary lights are on. If the room temperature is outside the specified safe range or if any lights are on, the owner gets an alert via SMS.
IoT, Deep Learning, Machine Learning, Raspberry Pi, BOLT, Computer Vision, OpenCV, Haarcascade Classifier, Viola Jones algorithm, LBPH.
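The LBPH recognizer used in the system above builds histograms of local binary patterns. The core LBP operator is small enough to sketch in pure Python: each of the eight neighbours of a pixel contributes one bit, set when the neighbour is at least as bright as the centre. This is only the basic operator, not OpenCV's full LBPH pipeline; the patch values are illustrative:

```python
def lbp_code(patch):
    """8-bit LBP code for the centre of a 3x3 patch: each neighbour
    contributes a 1-bit if it is >= the centre pixel."""
    center = patch[1][1]
    # neighbours taken clockwise starting from the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= center:
            code |= 1 << bit
    return code

patch = [[6, 2, 7],
         [4, 5, 1],
         [9, 8, 3]]
code = lbp_code(patch)
```

A face image is tiled into cells, the codes in each cell are histogrammed, and the concatenated histograms form the descriptor that LBPH compares between the stored users and a detected face.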