Wednesday, 04 November 2015 04:45

Secure Verification Technique for Defending IP Spoofing Attacks

Alwar Rengarajan1, Rajendran Sugumar2, and Chinnappan Jayakumar3

1, 2Department of Computer Science and Engineering, Veltech Multitech SRS Engineering College

3Department of Computer Science and Engineering, RMK Engineering College

Abstract: The Internet Protocol (IP) is the basis of Internet transmission, but its inadequate authentication mechanism paves the way for IP spoofing. Since IP spoofing enables further attacks in the network, a secure authentication technique is required to defend against spoofing attacks. To achieve this, in this paper we propose a secure verification technique for defending against IP spoofing attacks. Our technique authenticates the IP address of each Autonomous System (AS) using a Neighbour Authentication (NA) algorithm. The algorithm works by validating the NA table constructed by each node. The NA table is transmitted securely using the RC-6 encryption algorithm. In addition to encryption, our technique uses arithmetic coding to compress and decompress the NA table. The performance of our technique is demonstrated through simulation results: it incurs low overhead and significantly improves the performance of the network.
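The abstract describes a compress-then-encrypt pipeline for the NA table. A minimal Python sketch of that ordering, substituting zlib for arithmetic coding and a toy XOR keystream for RC-6 (both are stand-ins for illustration, not the paper's actual algorithms):

```python
import zlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy XOR keystream cipher standing in for RC-6 (illustration only).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def pack_na_table(table: dict, key: bytes) -> bytes:
    # Serialize, compress (stand-in for arithmetic coding), then encrypt.
    raw = "\n".join(f"{ip},{asn}" for ip, asn in sorted(table.items())).encode()
    return xor_stream(zlib.compress(raw), key)

def unpack_na_table(blob: bytes, key: bytes) -> dict:
    # Reverse the pipeline: decrypt, then decompress, then parse.
    raw = zlib.decompress(xor_stream(blob, key))
    return {ip: asn for ip, asn in
            (line.split(",") for line in raw.decode().splitlines())}
```

The important point the sketch preserves is the ordering: compression must precede encryption, since encrypted data has no redundancy left to compress.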

Keywords: IP, IP spoofing, secure verification technique.

Received September 17, 2012; accepted July 20, 2014

Full Text

 

 

Wednesday, 06 May 2015 08:50

Analyzing Learning Concepts in Intelligent Tutoring Systems

Korhan Gunel1, Refet Polat2, and Mehmet Kurt3

1Department of Mathematics, Adnan Menderes University, Turkey

2Department of Mathematics, Yasar University, Turkey

3Department of Mathematics and Computer Science, Izmir University, Turkey

 

Abstract: Information is increasing and changing rapidly at the present day, and the use of computers in educational and instructional processes has become inevitable. With the rapid progress in technology, research places more importance on integrating intelligent techniques into educational support systems such as distance learning and learning management systems. Such studies are considered applications of artificial intelligence to educational processes.

Regarding this viewpoint, in this study supervised learning models able to recognize the learning concepts in a given educational content presented to a tutoring system have been designed. For this aim, firstly, three different corpora were constructed from educational contents related to subjects such as Calculus, Abstract Algebra and Computer Science. For each candidate learning concept, feature vectors have been generated using a relation factor in addition to tf-idf values. The relation factor is defined as the ratio of the total number of the most frequent substrings in the corpus that appear with a candidate concept in the same sentence within an educational content to the count of the most frequent substring in the corpus. The achievement of this system is measured according to the F-measure.
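The tf-idf component of those feature vectors follows the standard formulation; a minimal sketch (a generic version, not reproducing the paper's relation factor or exact weighting):

```python
import math
from collections import Counter

def tf_idf(docs):
    # docs: list of token lists; returns one {term: tf-idf weight} dict
    # per document, using raw term frequency and log inverse document frequency.
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return out
```

A term appearing in every document gets an idf of zero, so only terms that discriminate between contents receive weight.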

Keywords: Educational technology, artificial intelligence on education, machine learning, intelligent tutoring.

Full Text 


 

 

Wednesday, 06 May 2015 04:26

A Group based Fault Tolerant Scheduling Mechanism to Improve the Application Turnaround Time on Desktop Grids

 

Mohammed Khan1, Irfan Hyder2, Ghayas Ahmed2 and Saira Begum2

1PAF-Karachi Institute of Economics and Technology, Pakistan

2Institute of Business Management, Pakistan

Abstract: Desktop grids are an attractive platform for high-throughput applications, but due to inherent resource volatility they are not feasible for short-lived applications that require a rapid turnaround time. An efficient and more knowledgeable resource selection mechanism can make this possible. In this paper, we propose a group-based resource scheduling mechanism. The groups are formed using three measures: the collective impact of CPU and RAM, spot checking, and task completion history. We evaluated the proposed mechanism over a network of 900 nodes with varied resources and behaviour and found that excluding desktop resources on the basis of clock rate alone is not a good idea; RAM should also be considered as a collective parameter besides spot checking and task completion history. We also show that appropriate scheduling mechanisms can only be implemented after grouping resources by computing strength and behaviour. The proposed mechanism ensures that tasks are allocated to hosts with a higher probability of task completion, which reduces task failures and improves fault tolerance.
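Grouping hosts by a composite of computing strength and behaviour, as described above, can be sketched as follows. The weights, normalizations and threshold here are assumptions made for the sketch, not the paper's actual grouping formula:

```python
def resource_score(clock_ghz, ram_gb, spot_pass_rate, completion_rate):
    # Illustrative composite score combining CPU, RAM, spot-checking results
    # and task-completion history; weights are assumptions, not the paper's.
    compute = 0.5 * (clock_ghz / 4.0) + 0.5 * (ram_gb / 16.0)
    behaviour = 0.5 * spot_pass_rate + 0.5 * completion_rate
    return 0.5 * compute + 0.5 * behaviour

def group(hosts, threshold=0.6):
    # Partition hosts into a reliable group (preferred for scheduling)
    # and a volatile group; each host = (name, ghz, ram, spot, completion).
    reliable = [h for h in hosts if resource_score(*h[1:]) >= threshold]
    volatile = [h for h in hosts if resource_score(*h[1:]) < threshold]
    return reliable, volatile
```

The point of such a score is exactly the abstract's finding: a host with a fast clock but little RAM or a poor completion history should not automatically be preferred.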

Keywords: Scheduling mechanism, fault tolerance, desktop grids.

 

Received May 15, 2013; accepted September 19, 2014

Full Text

 


 

Sunday, 26 April 2015 07:10

Investigation and Analysis of Research Gate User’s Activities using Neural Networks

  Omar Alheyasat

  Department of Computer Engineering, Al-Balqa' Applied University, Jordan

Abstract: Online Social Networks (OSNs) have been proliferating in the past decade as general-purpose public networks. Billions of users currently subscribe, uploading, downloading, sharing opinions and blogging. Private, domain-specific OSNs emerged to serve specialized communities. Research Gate (RG) is considered one of the most popular private academic social networks for developers and researchers on the Internet. The current study consists of two parts: the first is a measurement study of user activities in RG, and the second deals with the relationship between user profile data and their links. To this end, a sample of one million RG user records was collected. To facilitate this analysis, three-layer back-propagation neural network models were generated. The purpose of this network is to show the correlation between user profile data and the number of their followers. The results show that there is a high positive relationship between a user's followers and research activities (publications, impact factor, total number of publication views and citations). In addition, the results indicate that the number of questions and answers (activity) of a user has a low correlation with the corresponding followers. The present results demonstrate that the question/answer contributions of researchers are limited and therefore need more collaboration from RG researchers.
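The high/low relationships the abstract reports are correlations between profile variables and follower counts; the standard Pearson correlation coefficient such an analysis typically rests on can be computed as:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences:
    # covariance divided by the product of the standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 corresponds to the "high positive relationship" found for publications and citations; a value near 0 to the weak correlation found for question/answer activity.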

Keywords: RG, neural networks, OSN, regression, measurement, crawling, follower.

  Received December 20, 2013; accepted December 23, 2014

Full Text


 

Wednesday, 01 April 2015 07:46

Predicting the Existence of Design Patterns based on Semantics and Metrics

Imène Issaoui1, Nadia Bouassida2 and Hanêne Ben-Abdallah3

1Preparatory Institute for Engineering Studies, University of Monastir, Tunisia

2Department of Computer Science, University of Sfax, Tunisia

3Faculty of Computing and Information Technology, King Abdulaziz University, KSA

Abstract: As part of the reengineering process, the identification of design patterns offers important information to the designer. In fact, identifying implemented design patterns can aid the comprehension of an existing design and provides the grounds for further code/design improvements. However, existing pattern detection approaches generally have problems detecting patterns in an optimal manner: they either detect only exact pattern instantiations or offer no guidance in deciding which pattern to look for first amongst the various patterns. To overcome these two limitations, we propose to optimize any pattern detection approach by preceding it with a preliminary "sniffing" step that detects the potential existence of patterns and orders the candidate patterns in terms of their degree of resemblance to design fragments. Our approach uses design metrics to characterize the structure and semantics of the various design patterns.

Keywords: Metrics, design pattern, quality assurance, sniffer.

Received September 3, 2013; accepted April 27, 2014

Wednesday, 01 April 2015 07:13

Dynamic Group Recommendation with Modified Collaborative Filtering and Temporal Factor

Jinpeng Chen, Yu Liu and Deyi Li

School of Computer Science and Engineering, BeiHang University, China

Abstract: Group recommendation, which provides a group of users with information items, has become increasingly important in both the workplace and people's social activities. Because users change their preferences or interests over time, the dynamics and diversity of group members is a challenging problem for group recommendation. In this article, we introduce a novel group recommendation method that fuses a modified collaborative filtering methodology with a temporal factor in order to solve the dynamics problem. Meanwhile, we also put forward a new method of alleviating the sparsity problem so as to improve the accuracy of recommendation. We have tested our method in the music recommendation domain. Experimental results indicate that the proposed group recommender method provides better performance than the original method and gRecs. Efficiency and scalability tests also show that our method is practical.
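A common way to realize a temporal factor in collaborative filtering is to weight older ratings down exponentially, so that a user's recent preferences dominate. A minimal sketch (the half-life and weighting scheme are assumptions for illustration, not the paper's exact formulation):

```python
import math

def decayed_score(ratings, now, half_life=30.0):
    # ratings: list of (value, timestamp_in_days); a rating loses half its
    # influence every `half_life` days, capturing preference drift over time.
    lam = math.log(2) / half_life
    num = sum(v * math.exp(-lam * (now - t)) for v, t in ratings)
    den = sum(math.exp(-lam * (now - t)) for _, t in ratings)
    return num / den
```

With this weighting, a 5-star rating given today outweighs a 1-star rating given long ago, whereas a plain average would treat them equally.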

Keywords: Recommender systems, group recommendation, collaborative filtering, temporal factor, sparsity, dynamics.

Received August 31, 2013; accepted April 20, 2014

 

Wednesday, 01 April 2015 07:11

Texts Semantic Similarity Detection Based Graph Approach

Majid Mohebbi and Alireza Talebpour

Department of Computer Engineering, Faculty of Electrical and Computer Engineering,
Shahid Beheshti University, Iran

Abstract: Similarity of text documents is important for analyzing and extracting useful information from text documents and generating appropriate data. Several lexical matching techniques have been offered to determine the similarity between documents; these have been successful to a certain extent, but fail to find the semantic similarity between two texts. Therefore, semantic similarity approaches were suggested, such as corpus-based methods and knowledge-based methods, e.g., WordNet-based methods. This paper offers a new method for Paraphrase Identification (PI) that measures the semantic similarity of texts using a graph-based idea. We also take into account the order of the words in a sentence. We offer a graph-based algorithm, with a specific implementation, for similarity identification that makes extensive use of word similarity information extracted from WordNet. Experiments were performed on the Microsoft Research Paraphrase Corpus, and we show that our approach achieves appropriate performance.

Keywords: WordNet, semantic similarity, similarity metric, graph theory.

Received November 17, 2013; accepted June 23, 2014

 

Wednesday, 01 April 2015 07:07

Binary Data Comparison using Similarity Indices and Principal Components Analysis

Nouhoun Kane, Khalid Aznag, Ahmed El Oirrak and Mohammed Kaddioui

Computer Science Department, Faculty of Sciences Semlalia, Cadi Ayyad University, Morocco

 

Abstract: This work is a study of binary data, especially binary images, a widely used source of information. In general, comparing two binary images that represent the same content is not always easy, because an image can undergo transformations such as translation, rotation, affinity, resolution change, scale change or change in appearance. In this paper, we address the translation and rotation problems. For the translation case, similarity indices are computed between the image rows or blocks. In the case of rotation, first the contour coordinates are extracted; then we compute the covariance matrix used in Principal Components Analysis (PCA) and the corresponding eigenvalues, which are invariant to this type of movement. We also compare our approach, which has complexity O(M+N), to the Hausdorff Distance (HD), which has complexity O(M×N) for an M×N image. In our approach, HD is used only to compare distances between 1D signatures.
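The rotation-invariance claim rests on a standard fact: rotating a point set is a similarity transform of its covariance matrix, so the eigenvalues are unchanged. A self-contained 2-D sketch (closed-form eigenvalues of the 2×2 covariance matrix; an illustration of the principle, not the paper's implementation):

```python
import math

def cov_eigenvalues(points):
    # Eigenvalues of the 2x2 covariance matrix of a 2-D point set,
    # via the trace/determinant closed form for 2x2 matrices.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    d = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return sorted([tr / 2 - d, tr / 2 + d])

def rotate(points, theta):
    # Rotate every point about the origin by angle theta.
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

Comparing the eigenvalue pairs of two contours therefore gives a rotation-invariant signature.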

Keywords: Binary images, covariance matrix, similarity index, HD.

Received August 31, 2013; accepted April 20, 2014

 

Wednesday, 01 April 2015 07:02

Clustering with Probabilistic Topic Models on Arabic Texts: A Comparative Study of LDA and K-means

Abdessalem Kelaiaia1 and Hayet Merouani2

1Department of Computer Sciences, University of May 08, Algeria
2 Department of Computer Sciences, University of Badji Mokhtar, Algeria


Abstract: Recently, probabilistic topic models such as Latent Dirichlet Allocation (LDA) have been widely used in many text mining tasks, such as retrieval, summarization and clustering, in different languages. In this paper, we present a first comparative study between LDA and K-means, two well-known methods for topic identification and clustering respectively, applied to Arabic texts. Our aim is to compare the influence of the morpho-syntactic characteristics of the Arabic language on the performance of the first method relative to the second. In order to study different aspects of these methods, the study is conducted on four benchmark document collections, in which the quality of clustering was measured using four well-known evaluation measures: the Rand index, the Jaccard index, F-measure and entropy. The results consistently show that LDA performs better than K-means in most cases.
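Of the four evaluation measures named above, the Rand index is the simplest to state: the fraction of point pairs on which two clusterings agree (both together or both apart). A minimal sketch of this standard definition:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    # Fraction of point pairs that two clusterings treat the same way:
    # a pair agrees if both clusterings put it in one cluster, or both split it.
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)
```

Because it compares co-membership rather than labels, the index is insensitive to cluster renaming, which is why it suits comparing LDA topics against K-means clusters.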

Keywords: Clustering, topics identification, Arabic text, LDA, k-means, preprocessing.

Received November 7, 2012; accepted November 27, 2013

 

Wednesday, 01 April 2015 06:58

Patching Assignment Optimization for Security Vulnerabilities

 

Shao-Ming Tong, Chien-Cheng Huang, Feng-Yu Lin and Yeali Sun

Department of Information Management, National Taiwan University, Taiwan

 

Abstract: This research focuses on how an IT support center applies its limited resources to develop a patch for a vulnerability once it is disclosed in a system. We propose an optimized procedure in which second-tier security engineers handle the updates for vulnerabilities whose patches have been released, while the frontline security engineers provide a firewall to contain the leakage and create and apply the patch in the shortest amount of time. In the face of some system vulnerabilities, the frontline security engineer has to build up a prevention procedure before the patch is released. The strategy of this study is to model the transfer of patch demands to the adequate system engineers as a mathematical programming problem, in which the objective function minimizes the survival time of the vulnerability (before the patch is released), subject to related constraints. The main contributions of this study are a non-linear, non-convex mixed integer programming formulation of the patching assignment problem and a near-optimal solution approach.

Keywords: Vulnerability, patch management, assignment algorithm, optimization, mathematical programming, near optimal solution.

Received September 4, 2013; accepted June 29, 2014

Wednesday, 01 April 2015 06:55

Performance of Random Forest and SVM in Face Recognition


Emir Kremic, Abdulhamit Subasi

Faculty of Engineering and Information Technologies, International Burch University,

 Bosnia and Herzegovina

Abstract: In this study, we present the performance of Random Forest (RF) and Support Vector Machine (SVM) classifiers in face recognition. RF-based algorithms are popular in computer vision and in solving the face recognition problem. SVM is a machine learning method that has been used for classification in face recognition; its kernel parameters were tuned for optimization. Testing was carried out on the International Burch University (IBU) image database, which contains 20 individual photos per person, with different facial expressions, each of size 205×274 px. The SVM achieved an accuracy of 93.20%, and when optimized with different classifiers and kernels the accuracies were 95.89%, 96.92% and 97.94%. RF achieved an accuracy of 97.17%. The approach was as follows: read the image, skin colour detection, RGB to gray conversion, histogram computation, and classification with SVM and RF. All research and testing were conducted with the aim of integration into a mobile face detection application, where the application can perform with higher accuracy and performance.
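The preprocessing steps listed in the pipeline (RGB to gray, histogram) are standard; a minimal sketch using the common ITU-R BT.601 luma weights, which is an assumption since the abstract does not specify the exact conversion:

```python
def rgb_to_gray(pixels):
    # ITU-R BT.601 luma conversion (assumed; a typical RGB-to-gray step).
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

def histogram(gray, bins=256):
    # Count how many pixels fall into each intensity level.
    h = [0] * bins
    for v in gray:
        h[v] += 1
    return h
```

The resulting histogram of intensities is what would then be fed, alongside other features, to the SVM or RF classifier.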

Keywords: SVM, random forest, face recognition.

Received November 3, 2013; accepted October 26, 2014

Full Text

 

 

Wednesday, 01 April 2015 06:50

Implementation of Image Processing System using Handover Technique with Map Reduce Based on Big Data in the Cloud Environment

Mehraj Ali and John Kumar

Department of Computer Application, Thiagarajar College of Engineering, India


Abstract: Cloud computing is one of the emerging techniques for processing big data. Cloud computing is also known as service on demand. A large set or large volume of data is known as big data. Processing big data (MRI images and DICOM images) normally takes more time. Hard tasks such as handling big data can be solved by using the concepts of Hadoop, and enhancing Hadoop helps the user to process large sets of images. The Hadoop Distributed File System (HDFS) and MapReduce are the two default main functions used to enhance Hadoop. HDFS is Hadoop's file storage system, used for storing and retrieving data. MapReduce is the combination of two functions, namely map and reduce: map is the process of splitting the inputs, and reduce is the process of integrating the map output. Recently, medical experts have experienced problems like machine failure and fault tolerance while processing results for scanned data. A unique optimized time-scheduling algorithm, called the Dynamic Handover Reduce Function (DHRF) algorithm, is introduced in the reduce function. The enhancement of Hadoop and the cloud, together with the introduction of DHRF, helps to overcome the processing risks and to obtain an optimized result with less waiting time and a reduced error percentage in the output image.
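The map/reduce split described above can be illustrated with a minimal word-count example in plain Python. This shows the programming model only; it is not the paper's DHRF algorithm:

```python
from collections import defaultdict

def map_phase(records):
    # Map: split each input record into (key, 1) pairs.
    for record in records:
        for word in record.split():
            yield word, 1

def reduce_phase(pairs):
    # Reduce: integrate the map output by summing the values per key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)
```

In a real Hadoop job the framework shuffles the map output so that all pairs with the same key reach the same reducer; here the two phases simply run in sequence.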

Keywords: Cloud computing, big data, HDFS, mapreduce, DHRF algorithm.

Received September 13, 2013; accepted March 20, 2014

Full Text

 

 

 

Wednesday, 01 April 2015 06:40

Probabilistic and Fuzzy Logic based Event Processing for Effective Business Intelligence

Govindasamy Vaiyapuri1 and Thambidurai Perumal2

1Department of Computer Science and Engineering, Pondicherry Engineering College, India

  2Perunthalaivar Kamarajar Institute of Engineering and Technology, India


Abstract: This paper focuses on Probabilistic Complex Event Processing (PCEP) in the context of real-world event sources of data streams. PCEP executes complex event pattern queries over continuously streaming probabilistic data with uncertainty. The methodology consists of two phases: efficient generic event filtering, and a probabilistic event sequence prediction paradigm. In the first phase, Non-deterministic Finite Automaton (NFA) based event matching filters the relevant events by discovering occurrences of user-defined event patterns in a large volume of continuously arriving data streams. In order to express complex event patterns in a more efficient form, a Complex Event Processing (CEP) language named the Complex Event Pattern Subscription Language (CEPSL) is developed by extending existing high-level event query languages. Furthermore, a query plan-based approach is used to compile the specified event patterns into the NFA and to distribute them to a cluster of state machines to improve scalability. In the second phase, an effective Dynamic Fuzzy Probabilistic Relational Model (DFPRM) is proposed to construct the probability space in the form of an event hierarchy. The proposed system deploys a Probabilistic Fuzzy Logic (PFL) based inference engine to derive composite event sequences approximately with a reduced probability space. To determine the effectiveness of the proposed approach, a detailed performance analysis is performed using a prototype implementation.
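NFA-based event matching tracks partial matches of a pattern as events stream in. A toy sketch with skip-till-next-match semantics and no probabilities (the CEPSL language and the probabilistic machinery of the paper are not reproduced; duplicate partial matches are merged):

```python
def nfa_match(events, pattern):
    # Count completions of `pattern` (a sequence of event types) in a stream.
    # Each NFA state is the index of the next pattern element awaited;
    # irrelevant events are skipped without killing a partial match.
    states = set()
    matches = 0
    for ev in events:
        nxt = set()
        for i in states | {0}:
            if i < len(pattern) and ev == pattern[i]:
                if i + 1 == len(pattern):
                    matches += 1        # pattern fully matched
                else:
                    nxt.add(i + 1)      # advance the partial match
            elif i > 0:
                nxt.add(i)              # skip event, keep partial match alive
        states = nxt
    return matches
```

A real CEP engine would additionally attach predicates, time windows and per-event probabilities to each transition; this sketch only shows the automaton skeleton.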

Keywords: CEP, event filtering, NFA, uncertain events, DFPRM.

Received June 12, 2013; accepted July 28, 2013

Full Text

 

 

 

Wednesday, 01 April 2015 06:31

Mahalanobis Distance-the Ultimate Measure for Sentiment Analysis

Valarmathi Balasubramanian1, Srinivasa Nagarajan2 and Palanisamy Veerappagoundar3

1Faculty of Soft Computing Division, VIT University, India

2Faculty of Manufacturing Division, VIT University, India

3Info Institute of Engineering, India

Abstract: In this paper, the Mahalanobis Distance (MD) is proposed as a measure to classify the sentiment expressed in a review document as either positive or negative. A new method for representing text documents using Representative Terms (RT) has been used. Representing text documents with a few representative dimensions is a relatively new concept, which is successfully demonstrated in this paper. The MD-based classifier performed with 70.8% accuracy in experiments carried out on the benchmark dataset containing 25000 movie reviews. The hybrid of the Mahalanobis Distance based Classifier (MDC) and a Multi-Layer Perceptron (MLP) resulted in 98.8% classification accuracy, which is the highest accuracy ever reported for a dataset containing 25000 reviews.
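An MD-based classifier assigns a document's feature vector to the class whose centroid is nearest in Mahalanobis distance. A minimal sketch using a diagonal covariance (a simplification for illustration; the full form uses the inverse covariance matrix):

```python
import math

def mahalanobis_diag(x, mean, var):
    # Mahalanobis distance with a diagonal covariance: each dimension is
    # scaled by its class variance before the Euclidean norm is taken.
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

def classify(x, class_stats):
    # class_stats: {label: (mean_vector, variance_vector)};
    # assign x to the class with the nearest centroid in MD.
    return min(class_stats, key=lambda c: mahalanobis_diag(x, *class_stats[c]))
```

Unlike plain Euclidean distance, MD discounts dimensions where a class naturally varies a lot, which is what makes it attractive as a classifier over a few representative term dimensions.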

 

Keywords: Sentiment analysis, MD, opinion mining, machine learning algorithms, hybrid classifier.

Received August 20, 2012; accepted September 26, 2013

Full Text

 

 


 

 

Wednesday, 01 April 2015 06:28

Spread Spectrum based Invertible Watermarking for Medical Images using RNS and Chaos

 Muhammad Naseem1, Ijaz Qureshi2, Muhammad Muzaffar 1 and Atta ur Rahman3

1School of Engineering and Applied Sciences, ISRA University, Pakistan 

2Department of Electrical Engineering, Air University, Pakistan 

3Barani Institute of Information Technology, University Rawalpindi, Pakistan

Abstract: In the current paper, we present a novel watermarking scheme that makes the watermark robust while keeping the image fragile, using the Residue Number System (RNS) and chaos. Residues of the image are used to keep it secure, since their sensitivity to change is high. Only the Region Of Interest (ROI) part of the image is converted to residues. In making residues of the ROI, some residues exceed eight bits, so these residues are reduced to eight bits. Two watermarks are embedded in two stages: one to achieve robustness using the Spread Spectrum (SS) technique, and the other to achieve fragility of the image by calculating the image digest. In the first stage, the spread watermark is embedded in Region Of Non-Interest (RONI) pixels using the chaotic key, and in the second stage, the hash calculated from the first stage is again embedded in RONI pixels based on the chaotic key. Moreover, the original image is not needed at the receiver end, which makes the proposed scheme blind.
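The RNS representation and its reconstruction via the Chinese Remainder Theorem can be sketched as follows. The moduli here are illustrative (pairwise coprime, with a product large enough to cover 8-bit pixel values); the paper's actual moduli are not specified:

```python
def to_residues(value, moduli):
    # Represent a value by its residues modulo pairwise-coprime moduli;
    # any change to the value perturbs the residues, giving fragility.
    return [value % m for m in moduli]

def from_residues(residues, moduli):
    # Chinese Remainder Theorem reconstruction of the original value.
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M
```

For 8-bit pixels, moduli such as (5, 7, 13) give M = 455 > 255, so every pixel value round-trips exactly.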

Keywords: RNS, Chinese Remainder Theorem (CRT), ROI, SS, RONI, chaos.

Received November 17, 2013; accepted June 23, 2014

 

 
Wednesday, 01 April 2015 06:24

Human Visual Perception-based Image Quality Analyzer for Assessment of Contrast Enhancement Methods

Soong Chen

College of Information Technology, University Tenaga Nasional, Malaysia

Abstract: Absolute Mean Brightness Error (AMBE) and entropy are two popular Image Quality Analyzer (IQA) metrics used for the assessment of Histogram Equalization (HE)-based contrast enhancement methods. However, recent studies show that they correlate poorly with Human Visual Perception (HVP), with a Pearson Correlation Coefficient (PCC) below 0.4. This paper proposes a new IQA that takes into account important properties of HVP with respect to luminance, texture and scale. Evaluation results show that the proposed IQA has significantly improved performance (PCC>0.9). It outperforms all IQAs in the study, including two prominent IQAs designed for assessing image fidelity in image coding: multi-scale structural similarity and the information fidelity criterion.
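AMBE, the baseline metric the abstract criticizes, is simply the absolute difference between the mean brightness of the original and the enhanced image:

```python
def ambe(original, enhanced):
    # Absolute Mean Brightness Error between two grayscale images,
    # each given as a flat sequence of pixel intensities.
    mo = sum(original) / len(original)
    me = sum(enhanced) / len(enhanced)
    return abs(mo - me)
```

Because AMBE reduces an entire image to one mean, it cannot see texture or scale effects, which is exactly the weakness the proposed HVP-based IQA addresses.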


Keywords: IQA, visual perception, distortions, contrast enhancement.

Received May 20, 2013; accepted November 10, 2013

Full Text


 

 
