New Image Watermarking Algorithm Based on DWT and Pixel Movement Function (PMF)

Razika Souadek and Naceur-Eddine Boukezzoula

Department of Electronics, University of Setif 1, Algeria

Abstract: In this paper, we propose a new image watermarking algorithm based on the Discrete Wavelet Transform (DWT) combined with a pixel movement function. The proposed algorithm uses a two-level DWT to compact higher energy into the LL1 component, and a Contrast Sensitivity Function (CSF) to improve invisibility and robustness; the new Pixel Movement Function (PMF) is applied to strengthen the security properties. The PMF performs N iterations inside each block and requires a changeable key K, recalculated at each iteration, for the position of each block. Numerical experiments demonstrate that the proposed method improves watermarking quality in terms of watermark imperceptibility, insertion capacity, and robustness against different attacks such as Joint Photographic Experts Group (JPEG) compression, noise addition, and geometrical attacks.
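
To illustrate the block-level pixel movement described above, here is a minimal Python sketch of a PMF-style scrambler; the per-iteration key-update rule and the use of a seeded permutation are assumptions for illustration, not the paper's exact construction.

import numpy as np

def pmf_scramble(block, key, iterations):
    # Permute the pixels of one block, refreshing the key K at each iteration N.
    flat = block.flatten()
    k = key
    for n in range(iterations):
        rng = np.random.default_rng(k)      # the key drives this round's permutation
        flat = flat[rng.permutation(flat.size)]
        k = (k * 31 + n + 1) % (2 ** 32)    # assumed per-iteration key update
    return flat.reshape(block.shape)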

Keywords: Wavelet transforms, image watermarking, image quality evaluation.

Received June 8, 2016; accepted May 7, 2018
https://doi.org/10.34028/iajit/17/1/1

A Novel Evidence Distance in Power Set Space

Lei Zheng1, Jiawei Zou1, Baoyu Liu1, Yong Hu2, and Yong Deng2

1College of Information Science and Technology, Jinan University, China

2Big Data Decision Institute, Jinan University, China

Abstract: Evidence distance measures are used to quantify the similarity of two bodies of evidence. However, existing measures do not account for the fact that a probability distribution on a power set can assign mass to subsets, not only to single elements. In this paper, a novel approach to measuring the distance between bodies of evidence is proposed, and several of its properties, such as nonnegativity, symmetry, the triangle inequality, downward compatibility, and higher sensitivity, are proved. A numerical example and a real application illustrate the efficiency of the new distance.
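
For context, the classical Jousselme distance that such measures typically extend can be sketched in a few lines of Python; the paper's novel power-set distance is not reproduced here, only this standard baseline.

import numpy as np

def jousselme_distance(m1, m2):
    # m1, m2: dicts mapping frozenset focal elements to mass values
    focals = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    v1 = np.array([m1.get(A, 0.0) for A in focals])
    v2 = np.array([m2.get(A, 0.0) for A in focals])
    # Jaccard similarity matrix D(A, B) = |A intersect B| / |A union B|
    D = np.array([[len(A & B) / len(A | B) for B in focals] for A in focals])
    diff = v1 - v2
    return float(np.sqrt(0.5 * diff @ D @ diff))

m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
m2 = {frozenset('b'): 0.6, frozenset('ab'): 0.4}
print(jousselme_distance(m1, m2))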

Keywords: Evidence theory, evidence distance, data function, target recognition system.

Received February 18, 2017; accepted November 27, 2017

https://doi.org/10.34028/iajit/17/1/2

A Neuro-Fuzzy System to Detect IPv6 Router Alert Option DoS Packets

Shubair Abdullah

Instructional and Learning Technology, Sultan Qaboos University, Oman

Abstract: Detecting denial-of-service attacks that solely target the router is a paramount security imperative in deploying IPv6 networks. State-of-the-art Denial of Service (DoS) detection methods aim to leverage the advantages of flow statistical features and machine learning techniques. However, detection performance is highly affected by the quality of the feature selector and the reliability of datasets of IPv6 flow information. This paper proposes a new neuro-fuzzy inference system to tackle the problem of classifying packets in IPv6 networks in the crucial situation of a small supervised training dataset. The proposed system classifies IPv6 router alert option packets into denial-of-service and normal classes by exploiting neuro-fuzzy strengths to boost classification accuracy. A mathematical analysis from the perspective of fuzzy set theory is provided to express the performance benefit of the proposed system. An empirical performance test is conducted on a comprehensive dataset of IPv6 packets produced in a supervised environment. The results show that the proposed system robustly outperforms several state-of-the-art systems.
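
The flavour of neuro-fuzzy inference over flow features can be conveyed with a tiny zero-order Sugeno-style sketch in Python; the features, membership parameters, and rule consequents below are illustrative assumptions, not the paper's trained system.

import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def classify_flow(pkt_rate, payload_entropy):
    # Rule 1: high packet rate AND low payload entropy -> DoS
    w1 = gauss(pkt_rate, 900.0, 150.0) * gauss(payload_entropy, 0.2, 0.15)
    # Rule 2: moderate rate AND moderate entropy -> normal
    w2 = gauss(pkt_rate, 100.0, 80.0) * gauss(payload_entropy, 0.7, 0.2)
    score = (w1 * 1.0 + w2 * 0.0) / (w1 + w2 + 1e-12)  # weighted rule consequents
    return "DoS" if score > 0.5 else "normal"

print(classify_flow(pkt_rate=950, payload_entropy=0.15))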

Keywords: DoS attacks, IPv6 router alert option, Neuro-Fuzzy, IPv6 network security.

Received February 23, 2017; accepted July 8, 2018

https://doi.org/10.34028/iajit/17/1/3

Machine Learning Based Prediction of Complex Bugs in Source Code

Ishrat-Un-Nisa Uqaili and Syed Nadeem Ahsan

Department of Computer Science, Iqra University, Karachi, Pakistan

Abstract: During the software development and maintenance phases, fixing severe bugs is very challenging and demands extra effort on a priority basis. Several research works have used software metrics to predict fault-prone software modules. In this paper, we propose an approach that categorizes different types of bugs according to their severity and priority and then uses them to label software metrics data. Finally, we use the labeled data to train supervised machine learning models for the prediction of fault-prone software modules. Moreover, to build an effective prediction model, we use a genetic algorithm to search for the sets of metrics that are most highly correlated with severe bugs.
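
A hedged sketch of the genetic-algorithm metric selection wrapped around a classifier is given below in Python; the classifier choice, population size, and mutation rate are illustrative assumptions rather than the paper's settings. Calling ga_select_metrics(X, y) on a labelled metrics matrix returns a boolean mask over the metric columns.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def ga_select_metrics(X, y, generations=20, pop_size=12, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1])).astype(bool)

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # keep fittest half
        children = parents ^ (rng.random(parents.shape) < 0.1)   # bit-flip mutation
        pop = np.vstack([parents, children])

    return pop[np.argmax([fitness(ind) for ind in pop])]         # best metric subset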

Keywords: Software bugs, software metrics, machine learning, fault prediction model.

Received March 28, 2017; accepted June 8, 2017

https://doi.org/10.34028/iajit/17/1/4

Designing Punjabi Poetry Classifiers Using Machine Learning and Different Textual Features

Jasleen Kaur1 and Jatinderkumar Saini2

1Department of Computer Engineering, PP Savani University, India

2Symbiosis Institute of Computer Studies and Research, India

Abstract: Analysis of poetic text is very challenging from a computational linguistic perspective. Computational classification of literary arts, especially poetry, is a very difficult task. For a library recommendation system, poems can be classified on various metrics such as poet, time period, sentiment, and subject matter. In this work, a content-based Punjabi poetry classifier was developed using the Weka toolset. Four categories were manually populated with 2034 poems: the Nature and Festival (NAFE), Linguistic and Patriotic (LIPA), Relation and Romantic (RORE), and Philosophy and Spiritual (PHSP) categories consist of 505, 399, 529, and 601 poems, respectively. These poems were passed through various pre-processing sub-phases such as tokenization, noise removal, stop word removal, and special symbol removal. The 31938 extracted tokens were weighted using the Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) weighting schemes. Based upon poetry elements, three different textual features (lexical, syntactic, and semantic) were used to develop classifiers with different machine learning algorithms: Naive Bayes (NB), Support Vector Machine (SVM), Hyper Pipes, and K-Nearest Neighbour. The results revealed that the semantic feature performed better than the lexical and syntactic features. The best performing algorithm is SVM, and the highest accuracy (76.02%) is achieved by incorporating semantic information associated with words.
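
The TF-IDF plus SVM stage translates directly into a few lines of scikit-learn (the paper used Weka; the corpus strings and labels below are placeholders):

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

poems = ["...tokenized poem text...", "...another poem..."]   # placeholder corpus
labels = ["NAFE", "RORE"]                                     # placeholder labels

clf = make_pipeline(TfidfVectorizer(), LinearSVC())           # TF-IDF weights + SVM
clf.fit(poems, labels)
print(clf.predict(["...an unseen poem..."]))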

Keywords: Classification, naïve bayes, hyper pipes, k-nearest neighbour, Punjabi, poetry, support vector machine, WordNet.

Received April 7, 2017; accepted July 8, 2018

https://doi.org/10.34028/iajit/17/1/5

Extracting Word Synonyms from Text using Neural Approaches

Nora Mohammed

College of Engineering, Al-Qadisiyah University, Iraq

Abstract: Extracting synonyms from textual corpora using computational techniques is an interesting research problem in the Natural Language Processing (NLP) domain. Neural techniques (such as Word2Vec) have recently been utilized to produce distributional word representations (also known as word embeddings) that capture semantic similarity/relatedness between words based on linear context. Nevertheless, using these techniques for synonym extraction poses many challenges, because similarity between vector word representations indicates not only synonymy between words, but also other sense relations as well as word association or relatedness. In this paper, we tackle this problem using a novel two-step approach. We first build distributional word embeddings using Word2Vec and then use the induced word embeddings as input to train a feed-forward neural network on an annotated dataset to distinguish between synonyms and other semantically related words.
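
A minimal sketch of the two-step approach, assuming the gensim 4 and scikit-learn APIs; the toy corpus and annotated word pairs are placeholders:

import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

sentences = [["big", "large", "house"], ["fast", "quick", "car"]]  # placeholder corpus
w2v = Word2Vec(sentences, vector_size=50, window=2, min_count=1)   # step 1: embeddings

pairs = [("big", "large"), ("big", "car")]   # placeholder annotated word pairs
y = [1, 0]                                   # 1 = synonyms, 0 = merely related
X = np.array([np.concatenate([w2v.wv[a], w2v.wv[b]]) for a, b in pairs])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)  # step 2
print(clf.predict(X))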

Keywords: Neural networks, semantic similarity, word representations, natural language processing.

Received April 17, 2017; accepted October 24, 2017

https://doi.org/10.34028/iajit/17/1/6

Privacy-Preserving for Distributed Data Streams: Towards l-Diversity

Mona Mohamed, Sahar Ghanem, and Magdy Nagi

Computer and Systems Engineering Department, Alexandria University, Egypt

Abstract: Privacy-preserving data publishing has been studied widely on static data. However, many recent applications generate data streams that are real-time, unbounded, rapidly changing, and distributed in nature. Recently, a few works have addressed k-anonymity and l-diversity for data streams; their models imply that if the stream is distributed, it is collected at a central site for anonymization. In this paper, we propose a novel distributed model where distributed streams are first anonymized by the distributed (collecting) sites before merging and releasing. Our approach extends Continuously Anonymizing STreaming data via adaptive cLustEring (CASTLE), a cluster-based approach that provides both k-anonymity and l-diversity for centralized data streams. The main idea is for each site to construct its local clustering model and exchange this local view with other sites so that all sites globally construct approximately the same clustering view. The approach is heuristic in the sense that not every update to the local view is sent; instead, triggering events are selected for exchanging cluster information. Extensive experiments on a real data set are performed to study the introduced Information Loss (IL) under different settings. First, the impact of the different parameters on IL is quantified. Then k-anonymity and l-diversity are compared in terms of messaging cost and IL. Finally, the effectiveness of the proposed distributed model is studied by comparing its IL to the IL of the centralized model (as a lower bound) and to a distributed model with no communication (as an upper bound). The experimental results show that the main contributing factor to IL is the number of attributes in the quasi-identifier (50%-75%), while the number of sites contributes only about 1%, which demonstrates the scalability of the proposed approach. In addition, providing l-diversity is shown to introduce about a 25% increase in IL compared to k-anonymity. Moreover, a 35% reduction in IL is achieved at a messaging cost (in bytes) of about 0.3% of the data set size.
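
The release condition at the heart of such schemes is easy to state in code; this is an illustrative check only, not the CASTLE clustering logic itself:

def releasable(cluster, k, l):
    # cluster: list of (quasi_identifier_tuple, sensitive_value) pairs
    sensitive = {s for _, s in cluster}
    return len(cluster) >= k and len(sensitive) >= l   # k-anonymity and l-diversity

cluster = [((34, "north"), "flu"), ((36, "north"), "cold"), ((35, "north"), "flu")]
print(releasable(cluster, k=3, l=2))   # True: 3 records, 2 distinct sensitive values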

Keywords: k-anonymity, l-diversity, data streams and clustering.

Received April 20, 2017; accepted December 18, 2017

https://doi.org/10.34028/iajit/17/1/7

Generating Sequence Diagrams from Arabic User Requirements using MADA+TOKAN Tool

Nermeen Alami, Nabil Arman, and Faisal Khamayseh

Department of Computer Science, Palestine Polytechnic University, Palestine

Abstract: A new semi-automated approach for generating sequence diagrams from Arabic user requirements is presented. In this novel approach, the Arabic user requirements are parsed using a natural language processing tool called MADA+TOKAN to generate the Part Of Speech (POS) tags of the parsed user requirements; a set of heuristics is then applied to the resulting tags to obtain the sequence diagram components: objects, messages, and workflow transitions. The generated sequence diagram is expressed using XML Metadata Interchange (XMI) so that it can be drawn with sequence diagram drawing tools. Our approach achieves better results than students in generating sequence diagrams; it has higher accuracy in generating the participants and lower accuracy in generating the messages exchanged between participants. The proposed approach is validated using a set of experiments involving real cases evaluated by a group of software engineers and a group of graduate students who are familiar with sequence diagrams.
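
The heuristic layer can be illustrated with a toy Python fragment in which nouns become participants and verbs become messages; the tag names are simplified stand-ins for the richer tags MADA+TOKAN emits:

def extract_components(tagged):
    participants, messages = [], []
    for word, tag in tagged:
        if tag == "NOUN" and word not in participants:
            participants.append(word)      # candidate sequence-diagram participant
        elif tag == "VERB":
            messages.append(word)          # candidate message between participants
    return participants, messages

tagged = [("customer", "NOUN"), ("sends", "VERB"), ("order", "NOUN"),
          ("system", "NOUN"), ("confirms", "VERB"), ("payment", "NOUN")]
print(extract_components(tagged))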

Keywords: UML, automated software engineering, sequence diagram, Arabic user requirements.

Received May 18, 2017; accepted December 6, 2018

https://doi.org/10.34028/iajit/17/1/8

An Improved Grey Wolf Optimization Algorithm Based Task Scheduling in Cloud Computing Environment

Gobalakrishnan Natesan1 and Arun Chokkalingam2

1Department of Information Technology, Sathyabama University, India

2Department of Electronics and Communication Engineering, R.M.K College of Engineering and Technology, India

Abstract: The demand for massive computing power and storage space has been escalating in various fields, and cloud computing has been introduced to satisfy this need. The capability of providing these services effectively and economically has made cloud computing technology popular. With the advent of virtualization, IT services have started to shift to cloud computing, and virtualization has paved the way for seemingly inexhaustible resource availability. As cloud computing is still in an unrefined form, more analysis is needed to derive its full potential, particularly of the way resources and tasks are allocated in the cloud environment, which in turn determines the Quality of Service (QoS) offered by cloud service providers. This paper proposes and simulates the Performance-Cost Grey Wolf Optimization (PCGWO) algorithm to optimize the allocation of resources and tasks in the cloud computing domain using the CloudSim toolkit. The main purpose is to lower both processing time and cost in accordance with the objective function. The superiority of the proposed technique is evident from the simulation results, which show a comprehensive reduction in task completion time and cost; using this technique, more tasks can be completed efficiently within the deadline. The results indicate that, in terms of performance, the PCGWO method fares better than existing algorithms.
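
The core grey wolf position update that PCGWO builds on can be sketched in continuous form as follows; the cost function is generic here, and the discrete mapping of wolf positions to task-VM assignments is omitted:

import numpy as np

def gwo_minimize(cost, dim, n_wolves=10, iters=50, seed=1):
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(0, 1, (n_wolves, dim))
    for t in range(iters):
        order = np.argsort([cost(w) for w in wolves])
        alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
        a = 2 - 2 * t / iters                    # control parameter decays 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = new_pos / 3              # average pull of the three leaders
    return min(wolves, key=cost)

print(gwo_minimize(lambda x: np.sum(x ** 2), dim=3))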

Keywords: Virtualization, cloud computing, GWO, task scheduling, optimization, resource, CloudSim and QoS.

Received July 8, 2017; accepted September 13, 2017

https://doi.org/10.34028/iajit/17/1/9

Training Convolutional Neural Network for Sketch Recognition on Large-Scale Dataset

Wen Zhou1 and Jinyuan Jia2

1School of Computer and Information, Anhui Normal University, China

2School of Software Engineering, Tongji University, China

Abstract: With the rapid development of computer vision technology, increasing focus has been put on image recognition. More specifically, the sketch is an important kind of hand-drawn image that is garnering increased attention. Moreover, as handheld devices such as tablets and smartphones have become more popular, it has become increasingly convenient for people to hand-draw sketches on this equipment. Hence, sketch recognition is a necessary task for improving the performance of intelligent equipment. In this paper, a sketch recognition learning approach is proposed that is based on the Visual Geometry Group 16 Convolutional Neural Network (VGG16 CNN). In particular, to diminish the effect of the number of sketches on the learning method, we adopt a strategy of increasing the quantity of sketches to improve their diversity and scale. Initially, sketch features are extracted via the pretrained VGG16 CNN. Additionally, we obtain contextual features based on the stroke traversal scheme. Then, the VGG16 CNN is trained using a joint Bayesian method to update the related network parameters, and the network is applied to predict the labels of input sketches so as to automatically recognize the label of a sketch. Finally, experiments are conducted comparing our method with state-of-the-art methods, showing that our approach is feasible and superior.
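
The pretrained feature-extraction step looks roughly like the following Keras sketch; the framework, layer choice, and random stand-in image are assumptions for illustration:

import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

model = VGG16(weights="imagenet", include_top=False, pooling="avg")
sketch = np.random.rand(1, 224, 224, 3) * 255        # stand-in for a sketch image
features = model.predict(preprocess_input(sketch))   # 512-dimensional descriptor
print(features.shape)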

Keywords: Sketch recognition, VGG16 convolutional neural network, contextual features, strokes traverse, joint Bayesian.

Received September 5, 2017; accepted April 28, 2019

https://doi.org/10.34028/iajit/17/1/10

A Novel Amended Dynamic Round Robin Scheduling Algorithm for Timeshared Systems

Uferah Shafi1, Munam Shah1, Abdul Wahid1, Kamran Abbasi2, Qaisar Javaid3, Muhammad Asghar4, and Muhammad Haider1

1Department of Computer Science, COMSATS University Islamabad, Pakistan

2Department of Distance Continuing and Computer Education, University of Sindh, Pakistan

3Department of Computer Science, International Islamic University, Pakistan

4Department of Computer Science, Bahauddin Zakariya University, Pakistan

Abstract: The Central Processing Unit (CPU) is the most significant resource, and its scheduling is one of the main functions of an operating system. In timeshared systems, Round Robin (RR) is the most widely used scheduling algorithm. The efficiency of the RR algorithm is influenced by the quantum time: if the quantum is small, there is the overhead of more context switches; if the quantum is large, the algorithm behaves like First Come First Served (FCFS), in which there is a greater risk of starvation. In this paper, a new CPU scheduling algorithm named Amended Dynamic Round Robin (ADRR), based on CPU burst time, is proposed. The primary goal of ADRR is to improve the conventional RR scheduling algorithm using the notion of a dynamically adjusted quantum time, which is cyclically recomputed from CPU burst times. We evaluate and compare the performance of the proposed ADRR algorithm on parameters such as waiting time and turnaround time. Our numerical analysis and simulation results in MATLAB reveal that ADRR outperforms well-known algorithms such as conventional Round Robin, Improved Round Robin (IRR), Optimum Multilevel Dynamic Round Robin (OMDRR), and Priority Based Round Robin (PRR).
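
A dynamic-quantum round robin is easy to sketch in Python; note that the quantum rule below (median of the remaining bursts each cycle) is an assumption for illustration, not necessarily ADRR's exact formula:

from statistics import median

def dynamic_round_robin(bursts):
    remaining = dict(enumerate(bursts))
    time, finish = 0, {}
    while remaining:
        quantum = median(remaining.values())   # quantum re-adjusted every cycle
        for pid in list(remaining):
            run = min(quantum, remaining[pid])
            time += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                del remaining[pid]
                finish[pid] = time             # completion time of this process
    return finish

print(dynamic_round_robin([5, 12, 7, 3]))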

Keywords: CPU, scheduling algorithm, round robin scheduling, FCFS, ADRR.

Received August 10, 2015; accepted February 15, 2017

https://doi.org/10.34028/iajit/17/1/11

Modelling and Verification of ARINC 653 Hierarchical Preemptive Scheduling

Ning Fu, Lijun Shan, Chenglie Du, Zhiqiang Liu, and Han Peng

School of Computer Science, Northwestern Polytechnical University, China

Abstract: Avionics Application Standard Software Interface (ARINC 653) is a software specification for space and time partitioning in safety-critical avionics real-time operating systems. Correctly designed task schedulers are crucial for systems running ARINC 653. This paper proposes a model-checking-based method for analyzing and verifying ARINC 653 scheduling models. Based on priced timed automata theory, an ARINC 653 scheduling system is modelled as a priced timed automata network. The schedulability of the system is described as a set of temporal logic expressions and is analyzed and verified by a model checker. Our research shows that it is feasible to use model checking to analyze task schedulability in an ARINC 653 hierarchical scheduling system. The method models preemptive scheduling by using the stopwatch features of priced timed automata. Unlike traditional scheduling analysis techniques, the proposed approach uses an exhaustive method to automate the schedulability analysis, resulting in a more precise analysis.

Keywords: ARINC 653, schedulability analysis, model checking, UPPAAL.

Received June 15, 2016; accepted March 19, 2019

https://doi.org/10.34028/iajit/17/1/12

Large Universe Ciphertext-Policy Attribute-Based Encryption with Attribute Level User Revocation in Cloud Storage

Huijie Lian1, Qingxian Wang2, and Guangbo Wang1

1Zhengzhou Information Science and Technology Institute, Zhengzhou, China

231008 Army, Beijing, China

Abstract: Ciphertext-Policy Attribute-Based Encryption (CP-ABE), especially large universe CP-ABE, which is not bounded by the attribute set, is finding increasingly extensive application in cloud storage. However, an important challenge remains in the original large universe CP-ABE, namely dynamic user and attribute revocation. In this paper, we propose a large universe CP-ABE with efficient attribute-level user revocation, meaning that revoking one attribute of a user does not affect the access granted by the user's other legitimate attributes. To achieve the revocation, we divide the master key into two parts, a delegation key and a secret key, which are sent to the cloud provider and the user separately. Our scheme is proved selectively secure in the standard model under a "q-type" assumption. Finally, a performance analysis and experimental verification are carried out, and the experimental results show that, compared with existing revocation schemes, although our scheme increases the computational load of the Cloud Service Provider (CSP) in order to achieve attribute revocation, it does not need the participation of the Attribute Authority (AA), which reduces the computational load of the AA. Moreover, the user does not need any additional parameters to achieve attribute revocation apart from the private key, thus greatly saving storage space.

Keywords: Ciphertext-policy attribute-based encryption, outsourced decryption, large universe, attribute level user revocation.

Received February 12, 2017; accepted May 10, 2017

https://doi.org/10.34028/iajit/17/1/13

Face Identification Based on Bio-Inspired Algorithms

Sanaa Ghouzali1 and Souad Larabi2

1Department of Information Technology, King Saud University, Saudi Arabia

2Computer Science Department, Prince Sultan University, Saudi Arabia

Abstract: Most biometric identification applications suffer from the curse of dimensionality as the database size becomes very large, which can negatively affect both identification performance and speed. In this paper, we use Projection Pursuit (PP) methods to determine clusters of individuals; Support Vector Machine (SVM) classifiers are then applied to each cluster of users separately. PP clustering is conducted using the Friedman and Kurtosis projection indices, optimized by Genetic Algorithm and Particle Swarm Optimization methods. Experimental results obtained using the YALE face database show improvement in the performance and speed of the face identification system.
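
The kurtosis projection index at the core of the PP step can be sketched in numpy; random search stands in here for the GA/PSO optimizers, and the data is synthetic:

import numpy as np

def kurtosis_index(z):
    z = (z - z.mean()) / z.std()
    return np.mean(z ** 4) - 3.0               # excess kurtosis of the projection

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])  # two groups
dirs = [rng.standard_normal(5) for _ in range(200)]
best = max(dirs, key=lambda d: abs(kurtosis_index(X @ d / np.linalg.norm(d))))
print(kurtosis_index(X @ best / np.linalg.norm(best)))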

Keywords: Support vector machine, projection pursuit, particle swarm optimization, genetic algorithms, Kurtosis index, Friedman index.

Received February 23, 2017; accepted June 12, 2017

https://doi.org/10.34028/iajit/17/1/14

Improved Steganography Scheme based on Fractal Set

Mohammad Alia1 and Khaled Suwais2

1Faculty of Sciences and Information Technology, Al-Zaytoonah University of Jordan, Jordan

2Faculty of Computer Studies, Arab Open University, Saudi Arabia

Abstract: Steganography is the art of hiding secret data inside digital multimedia such as images, audio, text, and video. It plays a significant role in current trends for providing secure communication and guarantees that secret information is accessible by authorised parties only. The Least-Significant Bit (LSB) approach is one of the important schemes in steganography. The majority of LSB-based schemes suffer from several problems, including distortion and limited payload capacity of the stego-image. In this paper, we present an alternative steganographic scheme that does not rely on cover images as existing schemes do. Instead, the image that carries the secure hidden data is generated as the image of a curve, which results from a series of computations carried out over chaotic fractal sets. The new scheme improves data-concealing capacity, since it achieves limitless concealing capacity and removes the possibility of attackers recovering any secret information from the resulting stego-image. From the security side, the proposed scheme enhances the level of security, as it depends on the exact matching between the secret information and the generated fractal (Mandelbrot-Julia) values; a key stream is created based on these matches. The proposed scheme has been evaluated and tested successfully from different perspectives.
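
The fractal values the scheme matches against come from iterating z -> z^2 + c; a minimal Mandelbrot membership count is shown below (the matching and key-stream construction of the paper are not reproduced):

def mandelbrot_iterations(c, max_iter=100):
    # Count how many iterations z -> z*z + c stays bounded (|z| <= 2).
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

print(mandelbrot_iterations(complex(-0.5, 0.5)))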

Keywords: Steganography, data hiding, security, Julia set, Mandelbrot set, and fractal set.

Received July 10, 2017; accepted December 17, 2017

https://doi.org/10.34028/iajit/17/1/15

A Combined Method of Skin- and Depth-Based Hand Gesture Recognition

Tukhtaev Sokhib1 and Taeg Keun Whangbo2

1Department of IT Convergence Engineering, Gachon University, Korea

2Department of Computer Science, Gachon University, Korea

Abstract: Kinect is a promising acquisition device that provides useful information about a scene through color and depth data. There has been keen interest in utilizing Kinect in many computer vision areas, such as gesture recognition. Given the advantages that Kinect provides, hand gesture recognition can be deployed efficiently with minor drawbacks. This paper proposes a simple yet efficient way of recognizing hand gestures by segmenting the hand region from both the color and depth data acquired by Kinect v1. The Inception image recognition model is used to check the reliability of the proposed method. Experimental results are derived from a sample dataset of Microsoft Kinect hand acquisitions. Under appropriate conditions, it is possible to achieve high accuracy in close to real time.
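
Combining a skin-colour mask with a depth cut-off can be sketched with OpenCV; the HSV bounds and depth threshold are illustrative assumptions, not the paper's calibrated values:

import cv2
import numpy as np

def segment_hand(color_bgr, depth_mm, max_depth=800):
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))   # rough skin-colour range
    near = (depth_mm < max_depth).astype(np.uint8) * 255   # hand assumed nearest object
    return cv2.bitwise_and(skin, near)

color = np.zeros((480, 640, 3), np.uint8)    # stand-ins for Kinect colour/depth frames
depth = np.full((480, 640), 1200, np.uint16)
mask = segment_hand(color, depth)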

Keywords: Gesture recognition, Microsoft Kinect, inception model, depth.

Received September 21, 2017; accepted September 23, 2018

https://doi.org/10.34028/iajit/17/1/16
