Sunday, 08 March 2015 14:36
 Smelling the Web: Olfactory Display for Web Objects

 Saad Abid2, Zhiyong Li1 and Renfa Li1

1College of Information Science and Engineering, Hunan University, China

2Department of Computer Science and Informatics, Al-Mansour University College, Iraq

Abstract: Internet technology has advanced considerably: Flash players, video components and other multimedia support are built into modern web pages, and services such as cloud storage, online banking and e-commerce are used on a daily basis. Still, interaction with a web page remains largely limited to seeing and clicking. In this paper, we implement a system that identifies the web-page component under the mouse pointer (mostly images), captures it, and extracts its metadata and Exchangeable Image File Format (EXIF) information. A basic natural language processing subsystem then tokenizes and parses this text to reach a single conclusion: what the image represents. The result, normally a single descriptive word, is sent to a micro-controller, which looks it up in a table of pulse and signal values and applies those pulses to an atomizer, so that the user smells the object under the mouse. We found that the system achieves a high identification success ratio across websites, with fairly accurate image identification.
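As a rough illustration of the last stage described above, the sketch below maps a recognised descriptive word to atomizer pulse parameters through a lookup table and frames them for the micro-controller. The table entries, the frame format and the send_to_controller helper are illustrative assumptions, not the authors' actual firmware interface.

```python
# Hedged sketch: word -> pulse-table lookup -> frame for the scent controller.
# Table values and serial protocol are assumptions made for illustration only.
SCENT_TABLE = {
    # word      (reservoir index, pulse width in ms, pulse count)
    "coffee":   (0, 40, 6),
    "rose":     (1, 25, 4),
    "ocean":    (2, 30, 5),
}

def pulses_for(word: str):
    """Return atomizer pulse parameters for a recognised word, else None."""
    return SCENT_TABLE.get(word.lower())

def send_to_controller(params, port=None):
    """Frame the pulse parameters; `port` could be e.g. a pyserial handle."""
    reservoir, width_ms, count = params
    frame = f"R{reservoir};W{width_ms};N{count}\n".encode()
    if port is None:                 # no hardware attached in this sketch
        print("would send:", frame)
    else:
        port.write(frame)

params = pulses_for("Coffee")        # the single word produced by the NLP stage
if params:
    send_to_controller(params)
```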

 Keywords: Olfactory displays, web-smell, web object, scent, atomization.

Received August 26, 2014; accepted March 19, 2014

Full Text 

 


Wednesday, 03 December 2014 14:47

Improvement in Rebalanced CRT RSA

Seema Verma and Deepak Garg

Department of Computer Science and Engineering, Thapar University, India

Abstract: Many improvements have been made to RSA since its origin, in terms of encryption/decryption speed and memory saving. This paper concentrates on performance improvement. Rebalanced RSA improves the decryption speed at the cost of encryption speed, and later work recovered encryption speed through rebalanced Chinese Remainder Theorem (CRT) variants, which in turn give up some decryption speed. This paper improves the performance of the encryption side of rebalanced RSA while maintaining the same decryption speed as rebalanced RSA, by adding the MultiPrime RSA feature to the rebalanced CRT variant. The proposed scheme gains the same advantage on the encryption side as the rebalanced CRT variants and is 2 times faster on the decryption side than those variants. Because of the MultiPrime feature, the key generation time is also decreased, by a factor of approximately 2.39 relative to the rebalanced RSA CRT variant. A comparison of the RSA variants with the new scheme is presented in tabular and graphical form for better analysis.
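For orientation, the sketch below shows where the decryption speed-up comes from: with a multi-prime modulus, each prime uses a private exponent reduced modulo p-1 and the partial results are recombined with the CRT. It uses toy primes and plain multi-prime CRT decryption only; it is not the rebalanced key generation proposed in the paper.

```python
# Hedged sketch: multi-prime RSA decryption via the Chinese Remainder Theorem.
# Toy primes for illustration; not the rebalanced scheme proposed in the paper.
from math import prod

def crt_decrypt(c, primes, d, n):
    """Decrypt c using a reduced exponent modulo (p - 1) for each prime p."""
    m = 0
    for p in primes:
        dp = d % (p - 1)                 # smaller exponent -> cheaper modular power
        r = pow(c % p, dp, p)            # partial result modulo p
        Np = n // p
        m = (m + r * Np * pow(Np, -1, p)) % n   # CRT recombination
    return m

primes = [1000003, 1000033, 1000037]     # toy 3-prime modulus (MultiPrime flavour)
n = prod(primes)
phi = prod(p - 1 for p in primes)
e = 65537
d = pow(e, -1, phi)
msg = 123456789
c = pow(msg, e, n)
assert crt_decrypt(c, primes, d, n) == msg
```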

Keywords: Cryptography, computational complexity, encryption, public key.

Received September 6, 2012; Accepted April 18, 2013

Full Text

Wednesday, 03 December 2014 14:43

An Integrated Approach for Measuring Semantic Similarity between Words and Sentences using Web Search Engine

 Kavitha A

Manonmaniam Sundaranar University, India

Abstract: Semantic similarity measures play vital roles in Information Retrieval (IR) and Natural Language Processing. Despite their usefulness in various applications, robustly measuring the semantic similarity between two words remains a challenging task. Here, three semantic similarity measures are proposed that use information available on the web to measure similarity between words and sentences. The proposed method exploits page counts and text snippets returned by a web search engine. In addition to direct associations, we develop indirect associations of words for estimating their similarity. Evaluation results on different data sets show that our methods outperform several competing methods.
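For reference, a page-count based similarity in the spirit of the WebJaccard family is sketched below; hits is a hypothetical placeholder for a search-engine page-count API, and the measures actually proposed in the paper may differ.

```python
# Hedged sketch: word similarity from search-engine page counts (WebJaccard style).
def hits(query: str) -> int:
    """Hypothetical placeholder: page count a web search engine reports for `query`."""
    raise NotImplementedError("wire this to a real search API")

def web_jaccard(w1: str, w2: str, threshold: int = 5) -> float:
    """Jaccard-style similarity computed purely from page counts."""
    h1, h2 = hits(w1), hits(w2)
    h12 = hits(f'"{w1}" "{w2}"')         # co-occurrence page count
    if h12 <= threshold:                 # suppress noisy, very rare co-occurrences
        return 0.0
    return h12 / (h1 + h2 - h12)
```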

 Keywords: Semantic similarity, web search engine, higher order association mining, support vector machine.

Received October 29, 2012; Accepted February 27, 2013

Full Text

 

Wednesday, 03 December 2014 14:39

Model Based Approach for Content Based Image Retrievals Based on Fusion and Relevancy Methodology

 T.V. Madhusudhana Rao1, S.Pallam Setty2 and Y.Srinivas3

1Department of Computer Science and Engineering, Thandra Paparaya Institute of Science and Technology, India

2 Department of Computer Science and Systems Engineering, Andhra University, India

3Department of Information Technology, GITAM University, India

Abstract: This paper proposes a methodology for Content Based Image Retrieval (CBIR) using a fusion and relevancy mechanism based on the K-L divergence associated with the Generalized Gamma Distribution; a feature-level fusion technique is employed to integrate the features corresponding to multiple modalities. The relevancy approach bridges high-level and low-level features. The target in CBIR is to retrieve the images relevant to the query, retrieving the most relevant images while optimizing the time complexity. A Generalized Gamma Distribution is used in this paper to model the parameters of the query image and, based on maximum likelihood estimation of the Generalized Gamma Distribution, the most relevant images are retrieved. The parameters of the Generalized Gamma Distribution are updated using the EM algorithm. The developed model is tested on brain images taken from the brain web data of the UCI database. The performance of the model is evaluated using precision and recall.
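For reference, one common parameterization of the generalized gamma density on which such likelihood-based ranking can be built is shown below; the paper may use a different parameterization.

```latex
f(x;\,a,d,p) \;=\; \frac{p\,x^{\,d-1}}{a^{d}\,\Gamma(d/p)}\,
\exp\!\left[-\left(\tfrac{x}{a}\right)^{p}\right],
\qquad x>0,\;\; a,d,p>0
```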

Keywords: CBIR, generalized gamma distribution, relevance image, query image, EM algorithm, precision and recall.

Received May 23, 2014; Accepted October 2, 2013

Full Text 

 

 

Wednesday, 03 December 2014 14:34

A Gene-Regulated Nested Neural Network

Romi Rahmat1, Muhammad Pasha2, Mohammad Syukur3 and Rahmat Budiarto4

 

1Fakultas Ilmu Komputer dan Teknologi Informasi, Universitas Sumatera Utara, Indonesia
2School of Computer Sciences, Universiti Sains Malaysia, Malaysia
3Fakultas Matematika and Ilmu Pengetahuan Alam, Universitas Sumatera Utara, Indonesia
4College of Computer Science and Information Technology, Albaha University, Saudi Arabia

 


Abstract: Neural networks have always been a popular approach for intelligent machine development and knowledge discovery. Although reports have featured successful neural network implementations, problems still exist with this approach, particularly its excessive training time. In this paper, we propose a Gene-Regulated Nested Neural Network (GRNNN) model as an improvement to existing neural network models that addresses the excessive training time problem. We use a gene regulatory training engine to control and distribute the genes that regulate the proposed nested neural network. The proposed GRNNN is evaluated and validated through experiments that accurately classify the 8-bit XOR parity problem. Experimental results show that the proposed model does not require excessive training time and meets the required objectives.
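To make the benchmark concrete, the sketch below trains a plain NumPy multilayer perceptron on the 8-bit parity task. It is only a baseline illustrating why this task is slow for ordinary networks; it is not the proposed GRNNN.

```python
# Hedged sketch: baseline MLP on 8-bit parity (the benchmark, not the GRNNN).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[(i >> b) & 1 for b in range(8)] for i in range(256)], dtype=float)
y = X.sum(axis=1) % 2                        # parity label of each 8-bit pattern

W1 = rng.normal(0, 1, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for epoch in range(20000):                   # parity typically needs many epochs
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2).ravel()
    g_out = (out - y) * out * (1 - out)      # squared-error gradient at the output
    g_h = g_out[:, None] @ W2.T * h * (1 - h)
    W2 -= lr * (h.T @ g_out[:, None]) / len(X); b2 -= lr * g_out.mean()
    W1 -= lr * (X.T @ g_h) / len(X);            b1 -= lr * g_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == y).mean())
```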

Keywords: Neural networks, gene regulatory network, artificial intelligence, bio-inspired computing.

Received May 13, 2013; accepted July 21, 2013

Full Text

Wednesday, 03 December 2014 13:29

A Safe Exit Approach for Continuous Monitoring of Reverse k Nearest Neighbors in Road Networks

Muhammad Attique, Yared Hailu, Sololia Gudeta Ayele, Hyung-Ju Cho and Tae-Sun Chung

Department of Computer Engineering, Ajou University, South Korea

Abstract: Reverse k Nearest Neighbor (RKNN) queries in road networks have been studied extensively in recent years. However, at present, there is still a lack of algorithms for moving queries in a road network. In this paper, we study how to efficiently process moving queries. Existing algorithms do not efficiently handle query movement. For instance, whenever a query changes its location, the result of the query has to be recomputed. To avoid this recomputation, we introduce a new technique that can efficiently compute the safe exit points for continuous reverse k nearest neighbors. Within these safe exit points, the query result remains unchanged and a request for recomputation of the query does not have to be made to the server. This significantly reduces server processing costs and the communication costs between the server and moving clients. The results of extensive experiments conducted using real road network data indicate that our proposed algorithm significantly reduces communication and computation costs.
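The sketch below illustrates the client-side effect of safe exit points: while the moving query stays between its safe exit points, the cached result is reused and no recomputation request is sent to the server. Modelling the position as an offset along a single road edge is a simplification for illustration, not the paper's full algorithm.

```python
# Hedged sketch: reuse the cached RkNN result while inside the safe region.
from dataclasses import dataclass

@dataclass
class SafeRegion:
    edge_id: int
    low: float       # offset (m) of the nearer safe exit point on this edge
    high: float      # offset (m) of the farther safe exit point

def need_recompute(edge_id: int, offset: float, region: SafeRegion) -> bool:
    """True only when the query leaves the region bounded by its safe exit points."""
    return edge_id != region.edge_id or not (region.low <= offset <= region.high)

region = SafeRegion(edge_id=7, low=120.0, high=240.0)   # cached with the last result
print(need_recompute(7, 180.0, region))   # False -> reuse cached result, no server call
print(need_recompute(7, 260.0, region))   # True  -> request recomputation
```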

Keywords: Continuous monitoring, reverse nearest neighbor query, safe exit algorithm, road network.

Received April 29, 2013; accepted July 11, 2013

Full Text

 

 

Wednesday, 03 December 2014 13:15

Privacy-Preserving Data Mining in Homogeneous Collaborative Clustering

Mohamed Ouda, Sameh Salem, Ihab Ali and El-Sayed Saad

Department of Communication Electronics and Computer Engineering, Helwan University, Egypt

 Abstract: Privacy has become an important concern in data mining. In this paper, a novel algorithm for privacy preservation in a distributed environment using a data clustering algorithm is proposed. The data is clustered locally and the encrypted aggregated information is transferred to the master site. This aggregated information consists of the centroids of the clusters along with their sizes. On the basis of this local information, global centroids are reconstructed and then transferred to all sites so that they can update their local centroids. Additionally, the proposed algorithm is integrated with the Elliptic Curve Cryptography (ECC) public key cryptosystem and Diffie-Hellman key exchange. The proposed distributed encrypted scheme adds no more than a 15% increase in running time relative to the distributed non-encrypted scheme, while giving at least a 48% reduction in running time relative to the centralized scheme on the same size of dataset. Theoretical and experimental analysis illustrates that the proposed algorithm effectively solves the privacy-preserving problem of clustering mining over distributed data and achieves the privacy-preservation aim.
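A minimal sketch of the aggregation step is shown below: each site sends (centroid, size) summaries and the master recombines them into global centroids. It assumes the local clusters are already aligned by index across sites, and the ECC/Diffie-Hellman encryption of the exchanged summaries is omitted.

```python
# Hedged sketch: merge per-site (centroids, sizes) summaries into global centroids.
import numpy as np

def merge_centroids(site_summaries):
    """site_summaries: list of (centroids [k, d], sizes [k]) tuples, one per site."""
    k, d = site_summaries[0][0].shape
    weighted_sum = np.zeros((k, d))
    counts = np.zeros(k)
    for centroids, sizes in site_summaries:
        weighted_sum += centroids * sizes[:, None]   # undo each local mean
        counts += sizes
    return weighted_sum / counts[:, None]            # global centroid per cluster

# Two sites, k = 2 clusters in 2-D
site_a = (np.array([[0.0, 0.0], [10.0, 10.0]]), np.array([40, 10]))
site_b = (np.array([[1.0, 1.0], [9.0, 11.0]]),  np.array([20, 30]))
print(merge_centroids([site_a, site_b]))
```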

 Keywords: Privacy-preserving, secure multi-party computation, k-means clustering algorithm.

Received December 20, 2013; accepted April 4, 2013

Full Text

 

Wednesday, 03 December 2014 13:11

A New Algorithm for Finding Vertex-Disjoint Paths

Mehmet Kurt1, Murat Berberler2 and Onur Ugurlu3
1Department of Mathematics and Computer, Izmir University, Turkey
2Department of Computer Science, Dokuz Eylul University, Turkey
3Department of Mathematics, Ege University, Turkey

 Abstract: Demands that could be labelled as “luxuries” in the past have become requirements, making it inevitable that service providers carry out new research and prepare alternative plans under harsh competition, in order to deliver services to customers at the committed standards while taking possible damage to wired and wireless networks into consideration. Finding vertex-disjoint paths offers many advantages in wired and wireless communication, especially in ad-hoc networks. In this paper, we propose a new algorithm that quickly calculates alternative routes which share no common vertex (vertex-disjoint paths) with a problematic route during point-to-point communication on the network, and we compare it to similar algorithms.
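For context, the sketch below is a simple greedy baseline for the problem: find a path with breadth-first search, block its internal vertices, and search again. It can miss solutions that a flow-based formulation would find and is not the algorithm proposed in the paper.

```python
# Hedged sketch: greedy search for vertex-disjoint paths between s and t.
from collections import deque

def bfs_path(adj, s, t, blocked):
    """Shortest path from s to t avoiding `blocked` vertices, or None."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in parent and v not in blocked:
                parent[v] = u
                queue.append(v)
    return None

def greedy_vertex_disjoint_paths(adj, s, t):
    blocked, paths = set(), []
    while (p := bfs_path(adj, s, t, blocked)) is not None:
        paths.append(p)
        blocked.update(p[1:-1])          # internal vertices may not be reused
    return paths

adj = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(greedy_vertex_disjoint_paths(adj, 1, 4))   # [[1, 2, 4], [1, 3, 4]]
```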

 Keywords: Vertex-disjoint paths, multipath, ad-hoc wireless networks.

 

Received September 2, 2013; accepted March 29, 2014

Full Text

 

Wednesday, 03 December 2014 13:07

A Differential Geometry Perspective about
Multiple Data Streams Preprocessing

Li Wen-Ping1,2, Yang Jing1 and Zhang Jian-Pei1

1College of Computer Science and Technology, Harbin Engineering University, China

2College of Mathematics Physics and Information Engineering, Jiaxing University, China

 Abstract: In the Multiple Data Streams (MDS) environment, data sources generate data with no end in sight. Because the data sources differ, the numbers of transactions across the streams of an MDS are not always equal during the same period. Preprocessing MDS to obtain the same number of samples for each stream is an essential step for many mining tasks. All existing preprocessing methods assume that data arrive simultaneously. However, this assumption may not hold in many real environments, owing to the multiple data sources and their different ways of generating data. This asynchrony issue is explored in this paper by bringing in differential geometry. First, we establish a novel stream model called POLAR, an intrinsic surface spanned by time, probability and value. We then propose a preprocessing approach, called COPOLAR, to obtain the same number of samples for each stream of an MDS. COPOLAR first projects the original observations onto POLAR, and then iteratively and incrementally merges the points with the shortest geodesic distance along a geodesic on the surface into the mid-point of that geodesic, until the desired number of points is reached. Experimental results on synthetic and real data show that COPOLAR is effective in maintaining both the statistical and vector characteristics of the streams.
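The merging idea can be illustrated as follows, with Euclidean mid-points standing in for geodesic mid-points on the POLAR surface; the differential-geometry machinery of COPOLAR itself is not reproduced here.

```python
# Hedged sketch: repeatedly merge the closest pair of points into its mid-point
# until k representatives remain (Euclidean stand-in for geodesic mid-points).
import numpy as np

def reduce_to_k(points, k):
    pts = [np.asarray(p, dtype=float) for p in points]
    while len(pts) > k:
        best = None
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = np.linalg.norm(pts[i] - pts[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = (pts[i] + pts[j]) / 2.0
        pts = [p for idx, p in enumerate(pts) if idx not in (i, j)] + [merged]
    return np.vstack(pts)

stream = [[0.0, 1.0], [0.1, 1.1], [2.0, 0.5], [2.1, 0.4], [5.0, 3.0]]
print(reduce_to_k(stream, 3))     # 5 observations reduced to 3 representatives
```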

 Keywords: Data mining, MDS, data preprocessing, data stream model, differential geometry, geodesic. 

Received May 18, 2013; accepted March 19, 2014

Full Text

 

 

Wednesday, 03 December 2014 12:54

Exploring the Potential of Schemes in Building NLP Tools for Arabic Language

Mohamed Ben Mohamed, Souheyl Mallat, Mohamed Nahdi and Mounir Zrigui

LaTICE Laboratory, Faculty of Sciences of Monastir, Tunisia

 

Abstract: Arabic is known for its sparseness, which explains the difficulty of its automatic processing. The Arabic language is based on schemes: lemmas are produced by derivation from roots and schemes. This characteristic presents two major advantages. First, this “hidden side” of the Arabic language, composed of schemes, suffers much less from sparseness since schemes form a finite set; second, schemes preserve a large number of features of the language in a much smaller vocabulary. Schemes therefore offer a very promising perspective and great potential for building accurate natural language processing tools for Arabic. In this work we explore this potential by building NLP tools that rely entirely on schemes. The work covers text classification and Probabilistic Context Free Grammar (PCFG) parsing.
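As a toy, transliterated illustration of why schemes form a small, reusable vocabulary, the sketch below slots a triliteral root into a few scheme-like templates; real Arabic morphology, and the scheme inventory used in the paper, is far richer.

```python
# Hedged sketch: derive surface forms by slotting a root into scheme templates.
# Digits 1-3 mark root-consonant positions; transliteration is simplified.
def apply_scheme(root: str, scheme: str) -> str:
    """root: three consonants, e.g. 'ktb'; scheme: template such as '1a2i3'."""
    return "".join(root[int(ch) - 1] if ch.isdigit() else ch for ch in scheme)

root = "ktb"                                  # the writing root k-t-b
for scheme in ("1a2a3a", "1a2i3", "ma12u3"):  # kataba, katib, maktub
    print(scheme, "->", apply_scheme(root, scheme))
```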

 Keywords: Arabic language, schemes, roots, derivation, text classification, PCFG, parsing.

Received August 18, 2013; accepted May 10, 2014

 

Full Text

 

Wednesday, 03 December 2014 12:32

Developing a Novel Approach for Content Based Image Retrieval Using Modified Local Binary Patterns and Morphological Transform
 

Farshad Tajeripour, Mohammad Saberi and Shervan Fekri-Ershad  

 Department of Computer Science, Engineering and IT, Shiraz University, Iran

 Abstract: Digital image retrieval is one of the major topics in image processing. In this paper, a novel approach is proposed to retrieve digital images from huge databases, using texture analysis techniques to extract discriminant features together with color and shape features. The proposed approach consists of three steps. In the first, shape detection is performed based on the Top-Hat transform to detect and crop the main object parts of the image, especially complex ones. The second step is a texture feature representation algorithm that uses color local binary patterns and local variance as discriminant operators. Finally, to retrieve the images that most closely match the query, the log-likelihood ratio is used. To decrease the computational complexity, a novel algorithm is provided that disregards categories not similar to the query image, using the log-likelihood ratio as a non-similarity measure together with a threshold tuning technique. The performance of the proposed approach is evaluated on the Corel and Simplicity image sets and compared with several well-known approaches in terms of precision and recall, which shows the superiority of the proposed approach. Low noise sensitivity, rotation invariance, shift invariance, gray-scale invariance and low computational complexity are some of its other advantages.
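For reference, the classic 3x3 local binary pattern operator on which colour LBP and local-variance features build is sketched below; this is the standard LBP, not the authors' modified variant.

```python
# Hedged sketch: classic 8-neighbour LBP codes for the interior pixels of a
# grayscale image, followed by the histogram used as a texture descriptor.
import numpy as np

def lbp_3x3(img):
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

img = np.random.randint(0, 256, (64, 64))
hist = np.bincount(lbp_3x3(img).ravel(), minlength=256)   # 256-bin texture descriptor
```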

Keywords: Image retrieval, texture analysis, local binary pattern, top-hat transform, log likelihood.

 

Received August 16, 2013; accepted July 28, 2014 

 Full Text

 

 

 

Wednesday, 03 December 2014 12:26

A Computerized System for Detection of Spiculated Margins based on Mammography

 Qaisar Abbas1, Irene Fondón2 and Emre Celebi3

1College of Computer and Information Sciences, Al Imam Mohammad Ibn Saud Islamic University, Saudi Arabia

2Department of Signal Theory and Communications, School of Engineering Path of Discovery, Spain

3Department of Computer Science, Louisiana State University, USA

 Abstract: Spiculated margins indicate a high risk of malignancy in breast cancer. The detection accuracy of current Computer-Aided Detection (CAD) systems for spiculated margins is not high, because of intensity heterogeneities and an often subtle and varied appearance. This paper presents an automatic system for Accurate Detection Of Spiculated Margins (ADSM) that measures their physical properties. In the proposed system, a pre-processing step is performed to suppress background noise and enhance contrast. Spiculated margins are then segmented by a Maximum Fuzzy Entropy Partitioning (MFEP) algorithm whose parameters are optimized using a Quantum Genetic Algorithm (QGA). Afterwards, the characterization of spicule regions is completed using morphological operators, Steerable-Ridge-Filtering (SRF) and quantification of physical properties. A data set of 220 mammogram masses was used to evaluate the proposed system. Experimental results indicate that the ADSM system achieves a high accuracy, with an Area Under the receiver operating characteristic Curve (AUC) of 0.875, compared to state-of-the-art systems. By integrating the ADSM system, the performance of CAD systems could potentially be improved.
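For reference, a standard fuzzy entropy of the De Luca-Termini type, of the kind maximized in fuzzy entropy partitioning, is shown below; the exact MFEP criterion used in the paper may differ.

```latex
H(A) \;=\; -\frac{1}{n}\sum_{i=1}^{n}
\Bigl[\mu_A(x_i)\ln\mu_A(x_i) + \bigl(1-\mu_A(x_i)\bigr)\ln\bigl(1-\mu_A(x_i)\bigr)\Bigr]
```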

 Keywords: CAD, spiculated mass segmentation, image enhancement, fuzzy entropy, QGA, SRF.

Received May 13, 2013; accepted July 21, 2013

 

 

Thursday, 04 September 2014 07:41

Adapted Normalized Graph Cut Segmentation with Boundary Fuzzy Classifier Object Indexing On Road Satellite Images

Singaravelu Prabhu1 and Dharmaraj Tensing2
1Department of Computer Science and Engineering, Kalaivani College of Technology, India
2School of Civil Engineering, Karunya University, India

Abstract: Image segmentation is an essential component of remote sensing, image inspection, classification and pattern identification. Categorization of road satellite images provides an important tool for the assessment of such images. In the present work, the researchers evaluate computer vision techniques, for instance segmentation, and knowledge-based techniques for the categorization of high-resolution images. For sorting road satellite images, a technique named Adapted normalized Graph cut Segmentation with Boundary Fuzzy classifier object Indexing (AGSBFI) is introduced. Initially, the road satellite image is segmented, with inverse determination of shapes, using the adapted normalized graph cut segmentation method. The features of the segmented area are extracted and the classification of unknown boundaries is carried out using a boundary fuzzy classifier. Finally, the classified images are recognized by location using an arbitrary object indexing scheme. The performance of the AGSBFI technique is measured in terms of classification efficiency and object recognition accuracy, with better results. AGSBFI addresses the problem of inverse determination of unknown shape, boundary and location found in existing methods. Analytical and empirical results show better object recognition accuracy with inverse determination of the shape, boundary and location of road satellite images.
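For reference, the standard normalized cut criterion (Shi and Malik) on which adapted graph cut segmentation builds is given below; the adapted variant in the paper modifies this baseline.

```latex
\mathrm{Ncut}(A,B) \;=\; \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)}
 + \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)},
\qquad
\mathrm{cut}(A,B)=\sum_{u\in A,\,v\in B} w(u,v),\quad
\mathrm{assoc}(A,V)=\sum_{u\in A,\,t\in V} w(u,t)
```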

Keywords: Image segmentation, boundary fuzzy classifier, adapted normalized graph cuts, arbitrary object indexing, categorization, road satellite images.

 Received April 8, 2013; accepted  June 25, 2013
