Wednesday, 24 February 2021 04:28

A Novel Recurrent Neural Networks Architecture for Behavior Analysis

Neziha Jaouedi1, Noureddine Boujnah2, and Mohamed Bouhlel3

1Electrical Engineering Department, Gabes University, Tunisia

1,3SETIT Lab, Tunisia

2Faculty of Sciences of Gabes, Tunisia

Abstract: Behavior analysis is an important yet challenging task in the area of computer vision, and the analysis of human behavior remains a necessity in several sectors. With the increase in crime, video surveillance is needed to keep belongings safe and to detect events automatically, collecting important information to assist security guards. Moreover, the surveillance of human behavior has recently been used in the medical field to quickly detect physical and mental health problems in patients. The complexity and variety of human features in video sequences encourage researchers to find an effective representation, which is the most challenging part: it must be invariant to changes of viewpoint, robust to noise, and efficient with a low computation time. In this paper, we propose a new model for human behavior analysis which combines a transfer learning model and a Recurrent Neural Network (RNN). Our model extracts human features from frames using the Inception V3 pre-trained Convolutional Neural Network (CNN). The resulting features are trained using an RNN with Gated Recurrent Units (GRU). The performance of the proposed architecture is evaluated on three different human action datasets, UCF Sport, UCF101 and KTH, and achieves good classification accuracy.
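The GRU cell at the heart of such a model updates its hidden state frame by frame. As an illustration only, with toy dimensions and random weights standing in for the paper's Inception V3 features and trained parameters, a single GRU step can be sketched in NumPy as:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate, reset gate, candidate hidden state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_cand           # blend old and candidate state

rng = np.random.default_rng(0)
d_in, d_h = 8, 4   # toy sizes; real Inception V3 features would be 2048-d
Wz, Wr, Wh = (rng.standard_normal((d_h, d_in)) * 0.1 for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((d_h, d_h)) * 0.1 for _ in range(3))
h = np.zeros(d_h)
for _ in range(10):  # a clip of 10 frames of (random stand-in) CNN features
    h = gru_step(rng.standard_normal(d_in), h, Wz, Uz, Wr, Ur, Wh, Uh)
```

The final hidden state `h` summarizes the clip and would feed a classifier; because each step is a convex combination of the previous state and a tanh candidate, its entries stay bounded in (-1, 1).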

Keywords: Deep learning, recurrent neural networks, gated recurrent unit, video classification, convolutional neural network, behavior modelling, activity recognition.

Received December 29, 2018; accepted January 19, 2020

https://doi.org/10.34028/iajit/18/2/1

Development and Implementation of a Video Watermarking Method Based on DCT Transform

Ali Benziane1, Suryanti Awang2, and Mohamed Lebcir2

1Faculty of Science and Technology, University of Djelfa, Algeria

2Faculty of Computing, Universiti Malaysia Pahang, Malaysia

Abstract: This paper presents a new color video watermarking technique based on the one-dimensional Discrete Cosine Transform (DCT). The approach uses a differential embedding technique to insert the bits of the watermark into the video frames, so that the extraction process is blind and straightforward. To further ensure the security of the method, the binary image watermark is scrambled using the Arnold transform before being embedded into the video segment. In addition, a color space transformation from Red, Green and Blue (RGB) to YUV is performed in order to deal with the color nature of the video segments. The proposed approach exhibits good robustness against a wide range of attacks such as video compression, cropping, Gaussian filtering, and noise addition. Finally, we propose an implementation of the video watermarking technique on the Raspberry Pi 3 platform, where nearly the same remarks can be made as in the simulation results concerning robustness against video compression attacks.
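The Arnold transform mentioned above scrambles a square image by shearing pixel coordinates modulo the image size; because the map is invertible, the extractor can restore the watermark exactly. A minimal NumPy sketch (the 8x8 binary watermark and 3 iterations are illustrative choices, not the paper's settings):

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map scrambling of a square N x N image: (x, y) -> (x+y, x+2y) mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Undo the scrambling by reading each pixel back from its forward image."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        restored = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                restored[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = restored
    return out

rng = np.random.default_rng(7)
wm = rng.integers(0, 2, size=(8, 8))      # toy binary watermark
scrambled = arnold(wm, 3)
restored = arnold_inverse(scrambled, 3)
```

The iteration count acts as a key: extraction with the wrong count yields a still-scrambled image.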

Keywords: Blind video watermarking, DCT, differential embedding, Raspberry Pi.

Received May 1, 2019; accepted April 8, 2020


GovdeTurk: A Novel Turkish Natural Language Processing Tool for Stemming, Morphological Labelling and Verb Negation

Sait Yucebas1 and Rabia Tintin2

1Computer Engineering Department, Canakkale Onsekiz Mart University, Turkey

2Department of Student Affairs, Canakkale Onsekiz Mart University, Turkey

Abstract: GovdeTurk is a tool for stemming, morphological labeling and verb negation for the Turkish language. We designed comprehensive finite automata to represent Turkish grammar rules. Based on these automata, GovdeTurk finds the stem of a word by removing its inflectional suffixes using a longest-match strategy. The Levenshtein distance is used to correct spelling errors that may occur during suffix removal. Morphological labeling identifies the functionality of a given token. Nine different dictionaries are constructed, one for each specific word type, and are used in stemming and morphological labeling. The verb negation module is developed for lexicon-based sentiment analysis. GovdeTurk is tested on a dataset of one million words, and the results are compared with Zemberek and the Turkish Snowball Algorithm. While the closest competitor, Zemberek, achieves an accuracy of 80% in the stemming step, GovdeTurk reaches 97.3%. The morphological labeling accuracy of GovdeTurk is 93.6%. With these outperforming results, our model stands foremost among its competitors.
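The Levenshtein distance used here counts the minimum number of insertions, deletions and substitutions needed to turn one string into another, which lets a stemmer snap a slightly corrupted candidate stem to the nearest dictionary entry. A standard dynamic-programming sketch (not the authors' implementation):

```python
def levenshtein(a, b):
    """Edit distance between strings a and b, using two rolling DP rows."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))          # distance from "" to each prefix of b
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + cost)   # substitution (or match)
        prev = cur
    return prev[n]
```

A candidate stem would then be accepted if some dictionary word lies within a small distance threshold (e.g., 1).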

Keywords: Natural language processing, stemming, morphological analysis, Turkish language.

Received June 18, 2019; accepted April 18, 2020


An Ontology-based Compliance Audit Framework for Medical Data Sharing across Europe

Hanene Rahmouni1,3, Kamran Munir1, Intidhar Essefi3, Marco Mont2, and Tony Solomonides4

1Department of Computer Science and Creative Technologies, University of the West of England, UK

2Hewlett-Packard Labs, Cloud & Security Lab, UK

3University of Tunis el Manar, the Higher Institute of Medical Technologies of Tunis Research Laboratory of Biophysics and Medical Technologies Tunis, Tunisia

4Outcomes Research Network, Research Institute, NorthShore University Health System, USA

Abstract: Complying with privacy regulations in multi-jurisdictional health domains is important as well as challenging. The compliance management process will not be efficient unless it manages to show evidence of explicit verification of legal requirements. To achieve this goal, privacy compliance should be addressed through a "privacy by design" approach. This paper presents an approach to privacy protection verification by means of a novel audit framework. It aims to allow privacy auditors to look at past events of data processing carried out by healthcare organisations and verify compliance with legal privacy requirements. The adopted approach uses semantic modelling and a semantic reasoning layer that can be placed on top of hospital databases. These models allow the integration of fine-grained context information about the sharing of patient data and provide an explicit capture of the applicable privacy obligations. This is particularly helpful for ensuring seamless data access logging and effective compliance checking during audit trials.

Keywords: Privacy, regulation, verification, audit, compliance, ontology, SWRL, health data, public clouds, GDPR.

Received June 24, 2019; accepted April 15, 2020

Improved Intrusion Detection Algorithm based on TLBO and GA Algorithms

Mohammad Aljanabi1,2 and Mohd Arfian Ismail2

1College of Education, Aliraqia University, Iraq

2Faculty of Computing, Universiti Malaysia Pahang, Malaysia

Abstract: Optimization algorithms are widely used for the identification of intrusions. This is attributable to the increasing number of audit data features and the decreasing performance of human-based smart Intrusion Detection Systems (IDS) regarding classification accuracy and training time. In this paper, an improved method for intrusion detection for binary classification is presented and discussed in detail. The proposed method combines the New Teaching-Learning-Based Optimization (NTLBO) algorithm with supervised machine learning techniques, Support Vector Machine (SVM), Extreme Learning Machine (ELM), and Logistic Regression (LR), for Feature Subset Selection (FSS) and feature weighting. The process of selecting the smallest number of features without any effect on the accuracy of the results was considered a multi-objective optimization problem. The NTLBO was proposed as the FSS mechanism, and its algorithm-specific, parameter-less concept (which requires no parameter tuning during optimization) was explored. The experiments were performed on the prominent intrusion machine-learning datasets (KDDCUP'99 and CICIDS 2017), where significant enhancements were observed with the suggested NTLBO algorithm compared to the classical Teaching-Learning-Based Optimization (TLBO) algorithm; NTLBO presented better results than TLBO and many existing works. The results showed that NTLBO reached 100% accuracy on the KDDCUP'99 dataset and 97% on the CICIDS dataset.
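For context, the classical TLBO teacher phase, which the paper's NTLBO extends, moves every learner toward the current best solution using only the population mean and a random teaching factor, with no algorithm-specific tuning parameters. A toy sketch on the sphere function (illustrative only, not the authors' NTLBO or their IDS fitness function):

```python
import numpy as np

def teacher_phase(pop, fitness, rng):
    """Classic TLBO teacher phase: pull learners toward the best solution."""
    teacher = pop[np.argmin([fitness(p) for p in pop])]
    mean = pop.mean(axis=0)
    new_pop = []
    for p in pop:
        tf = rng.integers(1, 3)  # teaching factor, randomly 1 or 2
        cand = p + rng.random(p.shape) * (teacher - tf * mean)
        new_pop.append(cand if fitness(cand) < fitness(p) else p)  # greedy accept
    return np.array(new_pop)

rng = np.random.default_rng(1)
sphere = lambda x: float(np.sum(x ** 2))        # toy objective to minimize
pop = rng.uniform(-5, 5, size=(20, 4))          # 20 learners, 4 variables
init_best = min(sphere(p) for p in pop)
for _ in range(50):
    pop = teacher_phase(pop, sphere, rng)
best = min(sphere(p) for p in pop)
```

Because each learner only accepts an improving move, the best fitness is monotonically non-increasing across iterations.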

Keywords: TLBO, feature subset selection, NTLBO, IDS, FSS.

Received July 24, 2019; accepted May 9, 2020


A New Digital Signature Algorithm for Ensuring the Data Integrity in Cloud using Elliptic Curves

Balasubramanian Prabhu Kavin1 and Sannasi Ganapathy2

1Sri Ramachandra Institute of Higher Education and Research, India

2Centre for Cyber-Physical Systems and School of Computer Science and Engineering, Vellore Institute of Technology, India

Abstract: In this paper, we propose an Enhanced Digital Signature Algorithm (EDSA) for verifying data integrity when storing data in a cloud database. The proposed EDSA is developed using elliptic curves generated by introducing an improved equation; applying this upgraded equation, the EDSA generates two elliptic curves. The elliptic curve points are used as a public key for performing the signing and verification processes. Moreover, a new base formula is introduced for the digital signature operations of signing, verification and comparison, and from it we derive two new formulas for the signing and verification processes of EDSA. Finally, the proposed EDSA compares the resulting values of the signing and verification processes to check the originality of the document. The experimental results demonstrated the efficiency of the proposed EDSA in terms of key generation time, signing time and verification time through various experiments.
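The signing and verification formulas of any elliptic-curve signature scheme rest on point addition and scalar multiplication over a prime field. A generic sketch using a small textbook curve, y² = x³ + 2x + 2 over F₁₇ with base point (5, 1) of order 19 (this is not the paper's improved equation, just the standard group law):

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over F_p (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```

A private key would be a scalar d and the public key the point d·G; signing and verification then reduce to a handful of `ec_mul` calls plus whatever base formula the scheme defines.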

Keywords: Cloud, digital signature, enhanced digital signature, elliptic curve, signing process, verification process and comparison.

Received August 13, 2019; accepted June 17, 2020


Occlusion-aware Visual Tracker using Spatial Structural Information and Dominant Features

Rongtai Cai1 and Peng Zhu2

1Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, College of Photonic and Electronic Engineering, Fujian Normal University, China

2Fujian Newland Computer Co Ltd., China

Abstract: To overcome the problem of occlusion in visual tracking, this paper proposes an occlusion-aware tracking algorithm. The proposed algorithm divides the object into discrete image patches by clustering the pixel distribution of the object. To avoid drifting of the tracker to false targets, the algorithm extracts dominant features, such as the color histogram or Histogram of Oriented Gradients (HOG), from these image patches and uses them as cues for tracking. To enhance the robustness of the tracker, the algorithm employs the implicit spatial structure between these patches as another cue. It then incorporates these components into the particle filter framework, which results in a robust and precise tracker. Experimental results on color image sequences with different resolutions show that the proposed tracker outperforms the comparison algorithms in handling occlusion in visual tracking.
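One plausible way to score a candidate patch's color histogram against the reference (the abstract does not name a metric, so this is an assumed choice) is the Bhattacharyya coefficient, which is 1 for identical normalized histograms and decreases as they diverge:

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms: sum of sqrt of bin products."""
    return float(np.sum(np.sqrt(h1 * h2)))

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(16, 16))           # toy reference patch
h_ref, _ = np.histogram(patch, bins=16, range=(0, 256))
h_ref = h_ref / h_ref.sum()                           # normalize to sum 1
noisy = np.clip(patch + rng.normal(0, 10, patch.shape), 0, 255)
h_new, _ = np.histogram(noisy, bins=16, range=(0, 256))
h_new = h_new / h_new.sum()
score = bhattacharyya(h_ref, h_new)                   # close to 1 for mild noise
```

In a particle filter, such a score would weight each particle, with occluded patches contributing low scores that the spatial-structure cue can compensate for.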

Keywords: Visual tracking, feature fusion, occlusion-aware tracking, particle filter, part-based tracking.

Received September 9, 2019; accepted October 5, 2020


Support Vector Machine with Information Gain Based Classification for Credit Card Fraud Detection System

Kannan Poongodi and Dhananjay Kumar

Department of Information Technology, Anna University, MIT Campus, Chennai, India

Abstract: In the credit card industry, fraud is one of the major issues to handle, as genuine credit card customers may sometimes be misclassified as fraudulent and vice-versa. Several detection systems have been developed, but their complexity, along with limited accuracy and precision, restricts their usefulness in fraud detection applications. In this paper, a new methodology, Support Vector Machine with Information Gain (SVMIG), is proposed to improve the accuracy of identifying fraudulent credit card transactions with a high true positive rate. In SVMIG, min-max normalization is used to normalize the attributes, and the feature set is reduced using information gain based attribute selection. Further, the Apriori algorithm is used to select the frequent attribute sets and to reduce the candidate itemset size while detecting fraud. The experimental results suggest that the proposed algorithm achieves 94.102% accuracy on the standard dataset, higher than the existing Bayesian and random forest based approaches, for a large sample size in dealing with legitimate and fraudulent transactions.
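The information gain used for attribute selection is the reduction in label entropy obtained by splitting on a feature: IG(Y; X) = H(Y) - Σ_v p(X=v) H(Y | X=v). A small sketch with toy data (not the paper's credit card dataset):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature, labels):
    """Entropy of labels minus entropy conditioned on a discrete feature."""
    total = entropy(labels)
    values, counts = np.unique(feature, return_counts=True)
    cond = sum((c / len(feature)) * entropy(labels[feature == v])
               for v, c in zip(values, counts))
    return total - cond

# Toy example: a feature that perfectly predicts the label has maximal gain,
# while an uninformative feature has zero gain.
labels = np.array([0, 0, 1, 1])          # 0 = legitimate, 1 = fraudulent
perfect = np.array(['a', 'a', 'b', 'b'])
useless = np.array(['a', 'b', 'a', 'b'])
```

Attribute selection would then keep only the features whose gain exceeds a threshold before training the SVM.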

Keywords: Apriori algorithm, credit card fraud detection, information gain, support vector machine.

Received March 5, 2020; accepted September 7, 2020


Parallelization of Frequent Itemset Mining Methods with FP-tree: An Experiment with PrePost+ Algorithm

Olakara Jamsheela1 and Raju Gopalakrishna2

1EMEA College of Arts and Science, Calicut University, India

2Computer Science and Engineering, CHRIST (Deemed to be University), India

Abstract: Parallel processing has become a common programming practice because of its efficiency, and it has thus become an interesting field for researchers. With the introduction of multi-core processors as well as general-purpose graphics processing units, parallel programming has become affordable. This has led to the parallelization of many complex data processing algorithms, including algorithms in data mining. In this paper, a study on a parallel PrePost+ is presented. PrePost+ is an efficient frequent itemset mining algorithm. The algorithm has been modified into a parallel algorithm, and the obtained results are compared with those of the sequential PrePost+ algorithm.

Keywords: Data Mining algorithm, parallelization of PrePost+, parallel processing, multicore.

Received April 6, 2020; accepted August 26, 2020


An Additive Sparse Logistic Regularization Method for Cancer Classification in Microarray Data

Vijay Suresh Gollamandala1 and Lavanya Kampa1,2

1Department of Computer Science and Engineering, Lakireddy Bali Reddy College of Engineering, India

2Department of Information Technology, Lakireddy Bali Reddy College of Engineering, India

Abstract: Nowadays, cancer has become a deadly disease due to the abnormal growth of cells, and many researchers are working in this area on the early prediction of cancer. Proper classification of cancer data demands the identification of a proper set of genes through analysis of genomic data. Most researchers use microarrays to identify cancerous genomes. However, such data is high dimensional, with many more genes than samples, and it contains many irrelevant features and noisy data. The classification technique used to deal with such data influences the performance of the algorithm. A popular classification algorithm, Logistic Regression, is considered in this work for gene classification. Regularization techniques such as Lasso with the L1 penalty, Ridge with the L2 penalty, and hybrid Lasso with the L1/2+2 penalty are used to minimize irrelevant features and avoid overfitting. However, these methods are sparse parametric and limited to linear data, and they have not produced promising performance when applied to high dimensional genomic data. To solve these problems, this paper presents an Additive Sparse Logistic Regression with Additive Regularization (ASLR) method to discriminate linear and non-linear variables in gene classification. The results show that the proposed method proves to be the best regularized method for classifying microarray data compared to the standard methods.
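For reference, the plain Lasso-penalized logistic regression that the abstract contrasts ASLR against can be fitted by proximal gradient descent, where a soft-thresholding step zeroes out irrelevant coefficients. A toy sketch on synthetic data with only two informative "genes" (this is the baseline method, not ASLR itself; all sizes and hyperparameters are illustrative):

```python
import numpy as np

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=500):
    """Sparse logistic regression via proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))            # predicted probabilities
        grad = X.T @ (p - y) / len(y)                 # logistic loss gradient
        w = w - lr * grad                             # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # L1 prox
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))        # 200 samples, 20 "genes"
y = (X[:, 0] - X[:, 1] > 0).astype(float) # only genes 0 and 1 are informative
w = lasso_logistic(X, y)
```

The soft-thresholding operator drives the coefficients of the 18 noise genes to (near) zero while keeping the two informative ones, which is exactly the feature-selection behavior the sparse penalties above are chosen for.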

Keywords: Microarray data, sparse regularization, feature selection, logistic regression, and lasso.

Received April 30, 2020; accepted September 17, 2020

https://doi.org/10.34028/iajit/18/2/10


Machine Learning in OpenFlow Network: Comparative Analysis of DDoS Detection Techniques

Arun Kumar Singh

College of Computing and Informatics, Saudi Electronic University, Kingdom of Saudi Arabia


Abstract: Software Defined Network (SDN) allows the separation of the control layer and data forwarding at two different layers. However, the centralized control system in SDN is vulnerable to attacks, namely Distributed Denial of Service (DDoS). It is therefore necessary to develop a solution based on reactive applications that can identify, detect, and mitigate such attacks comprehensively. In this paper, an application has been built based on machine learning methods including Support Vector Machine (SVM) with linear and Radial Basis Function kernels, K-Nearest Neighbors (KNN), Decision Tree (DTC), Random Forest (RFC), Multi-Layer Perceptron (MLP), and Gaussian Naïve Bayes (GNB). The paper also proposes a new scheme for a DDoS dataset in SDN, gathered from considerably static data using port statistics. SVM was the most efficient method for identifying DDoS attacks, with accuracy, precision, and recall of approximately 100%, and can be considered the primary algorithm for detecting DDoS. In terms of speed, KNN had the slowest rate for the whole process, while the fastest was GNB.

Keywords: Support vector machine, software defined network, machine learning, distributed DoS, detection.

Received May 6, 2020; accepted September 9, 2020


A New Parallel Fuzzy Multi Modular Chaotic Logistic Map for Image Encryption

Mahmoud Gad1, Esam Hagras2, Hasan Soliman1, and Noha Hikal1

1Faculty of Computers and Information Sciences, Mansoura University, Egypt

2Faculty of Engineering, Delta University for Science and Technology, Egypt

Abstract: This paper introduces a new image encryption algorithm based on a Parallel Fuzzy Multi-Modular Chaotic Logistic Map (PFMM-CLM). First, a new hybrid chaotic system is introduced using four parallel cascaded chaotic logistic maps with dynamic parameter control, achieving a high Lyapunov exponent value and completely chaotic behavior of the bifurcation diagram. Fuzzy set theory is also used as a fuzzy logic selector to improve chaotic performance. The proposed algorithm has been tested as a Pseudo-Random Number Generator (PRNG); the randomness test results indicate that the system performs well and passes all randomness tests. Finally, the Arnold cat map with controllable iteration parameters is used to enhance the confusion property. Owing to its excellent chaotic properties and good randomization test results, the proposed chaotic system is used in image encryption applications. The simulation and security analysis indicate that the proposed algorithm has very high security performance and complexity.
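A single logistic map x_{n+1} = r·x_n·(1 - x_n), the building block the paper runs in four parallel cascades, already yields a crude PRNG by thresholding its iterates. A minimal sketch (the seed, parameter r, and transient length are illustrative values, not the paper's):

```python
def logistic_map_bits(x0=0.3141, r=3.99, n=64, skip=200):
    """Generate pseudo-random bits by thresholding logistic-map iterates."""
    x = x0
    for _ in range(skip):          # discard the transient so the orbit settles
        x = r * x * (1 - x)
    bits = []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

bits = logistic_map_bits()
```

For r near 4 the map is chaotic and highly sensitive to x0, which is what makes the seed usable as a key; the paper's contribution is combining several such maps with dynamic parameters and a fuzzy selector to strengthen exactly this kind of generator.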

Keywords: Image encryption, parallel multi modular chaotic maps, pseudo-random number generation, fuzzy logic selector.

Received August 24, 2019; accepted September 7, 2020
https://doi.org/10.34028/iajit/18/2/12


Ciphertext-Only Attack on RSA Using Lattice Basis Reduction

Anas Ibrahim1,2, Alexander Chefranov1, and Rushdi Hamamreh3

1Computer Engineering Department, Eastern Mediterranean University, North Cyprus

2Computer Engineering Department, Palestine Technical University, Palestine

3Computer Engineering Department, Al-Quds University, Palestine

Abstract: We use lattice basis reduction for a ciphertext-only attack on RSA. Our attack is applicable under conditions where known attacks are not, and, contrary to known attacks, it does not require prior knowledge of a part of a message or key, a small encryption key, or message broadcasting. Our attack succeeds when a vector comprised of a message and its exponent is likely to be the shortest in the lattice and meets Minkowski's Second Theorem bound. We have conducted experiments for messages, keys, and encryption/decryption keys with sizes from 40 to 8193 bits, with dozens of thousands of successful RSA cracks. It took about 45 seconds to crack 2001 messages of 2050 bits for large public key values related to Euler's totient function, and private keys of the same order. Based on our findings, for RSA not to be susceptible to the proposed attack, we recommend avoiding the RSA public key form used in our experiments.
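In two dimensions, lattice basis reduction can be done exactly with the Gaussian (Lagrange) algorithm, which repeatedly subtracts the nearest-integer multiple of the shorter basis vector from the longer one until no further shortening is possible; this is the simplest instance of the machinery such attacks build on. A sketch (not the authors' code):

```python
def gauss_reduce(u, v):
    """Gaussian (Lagrange) reduction of a 2-D integer lattice basis (u, v)."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    norm2 = lambda a: dot(a, a)
    if norm2(u) > norm2(v):
        u, v = v, u                          # keep u the shorter vector
    while True:
        m = round(dot(u, v) / norm2(u))      # nearest-integer projection of v on u
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v                      # u is now a shortest lattice vector
        u, v = v, u

# A badly skewed basis of the integer lattice Z^2 reduces to unit-length vectors.
u, v = gauss_reduce((1, 0), (10**6, 1))
```

The returned first vector solves the shortest-vector problem exactly in dimension 2; the attack's premise is that the message-bearing vector plays this shortest-vector role in a suitably built lattice.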

Keywords: Ciphertext-only attack, encryption key, euler’s totient function, Gaussian lattice basis reduction, RSA, shortest vector problem.

Received May 13, 2020; accepted September 28, 2020
https://doi.org/10.34028/iajit/18/2/13


Survey on Software Changes: Reasons and Remedies

Ibrahim Assi1, Rami Tailakh2, and Abdelsalam Sayyad1

1Joint Master in Software Engineering, Birzeit University, Palestine

 2Mashvisor Real Estate Advisory, Palestine

Abstract: Software systems play a key role in most businesses nowadays. Building robust, reliable and scalable software systems requires going through a software production cycle (or process). However, software systems are subject to changes, whether those amendments are important or not. Changes to software systems are viewed as a considerable issue in software engineering; they are considered a burden and are costly, especially in cases such as enterprise and large-scale software systems. This study aims to identify the reasons that cause software changes and to suggest remedies for those reasons. We surveyed the opinions of experts such as technical managers, team leaders, and senior developers, collecting 81 responses to a questionnaire that aimed to establish common software development practices in the local industry. We also conducted 16 semi-structured interviews targeting the most senior experts, in which we directly discussed the reasons and remedies for software changes. Our results highlight the most influential reasons for changes to software systems, such as changes to user requirements, requests for new features, the software development methodology, bug fixing, refactoring, and weak user experience design. Most importantly, the study solicited solutions that can reduce the need for software changes.

Keywords: Software changes, software maintenance, empirical study, survey, questionnaire, interviews.

Received May 12, 2019; accepted April 8, 2020

https://doi.org/10.34028/iajit/18/2/14
 