Wednesday, 16 January 2013 05:28

An Improved Feature Extraction and Combination of Multiple Classifiers for Query-by-Humming

Nattha Phiwma1 and Parinya Sanguansat2
 1Department of Computer Science, Suan Dusit Rajabhat University, Thailand
2Faculty of Engineering and Technology, Panyapiwat Institute of Management, Thailand

 
Abstract: In this paper, we propose new methods for feature extraction and soft majority voting to improve the efficiency and accuracy of music retrieval. In our work, the input is a humming sound, which is a raw sound wave, while Musical Instrument Digital Interface (MIDI) files are used as the reference songs in the database. Critical issues with humming sounds are their variations, such as in duration, sound quality, tempo, and key, as well as noise interference from both the environment and the acquisition instruments. Beyond these problems, humming sounds and MIDI also lie in different domains, which makes the two difficult to compare; to bring them into the same domain, we convert both into the frequency domain. Our approach starts with pre-processing, using features for note segmentation of the humming sound. The process then consists of four steps. First, while MIDI is already a sequence of pitches, the pitch of the humming sound needs to be extracted, which is done with the Subharmonic-to-Harmonic Ratio (SHR). Second, the extracted pitch is used to calculate all of the above attributes, and multiple classifiers are applied to classify the multiple subsets of these features. Third, for each subset containing multiple attributes, Multi-Dimensional Dynamic Time Warping (MD-DTW) is used for similarity measurement. Finally, Nearest Neighbours (NN) matching and soft majority voting are used to obtain the retrieval results, with voting resolving cases of equal scores. The experiments show that, to achieve a 100% accuracy rate at the early top-n ranks of retrieval, the appropriate feature set should consist of five classifiers.
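To make the matching stage concrete, the sketch below implements plain MD-DTW ranking in Python under our own assumptions: it presumes the SHR pitch extraction has already turned the query and each MIDI reference into (frames x features) arrays, and all names are illustrative rather than the authors' code.

```python
import numpy as np

def md_dtw(query, reference):
    """DTW distance between two multi-dimensional feature sequences.

    query, reference: arrays of shape (frames, features), e.g., pitch plus
    the derived attributes mentioned in the abstract.
    """
    n, m = len(query), len(reference)
    # Pairwise Euclidean distance across all feature dimensions at once.
    cost = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=2)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                 acc[i, j - 1],      # deletion
                                                 acc[i - 1, j - 1])  # match
    return acc[n, m]

def rank_songs(query, database):
    """Nearest-neighbour ranking of MIDI references by MD-DTW distance."""
    scores = [(name, md_dtw(query, ref)) for name, ref in database.items()]
    return sorted(scores, key=lambda s: s[1])
```

A query would then be answered by `rank_songs(query_features, midi_database)`, with ties at the top of the ranking left for the soft majority voting step described above.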


Keywords: Query-by-humming, feature extraction, majority voting, multiple classifiers, MD-DTW, SHR.
 
Received February 8, 2012; accepted May 22, 2012
  

Full Text

Sunday, 13 January 2013 05:23

A Survey: Face Recognition Techniques under Partial Occlusion

Aisha Azeem, Muhammad Sharif, Mudassar Raza, and Marryam Murtaza
Department of Computer Sciences, COMSATS Institute of Information Technology, Pakistan

 
Abstract: Systems that rely on the face recognition biometric have gained great importance ever since terrorist threats exposed weaknesses in the implemented security systems. Other biometrics, i.e., fingerprint or iris recognition, are not trustworthy in such situations, whereas face recognition is considered a fine compromise. This survey illustrates different face recognition practices that laid foundations for the partial occlusion dilemma, where faces are disguised to cheat the security system. Occlusion refers to the covering of the face image, which can be due to sunglasses, hair, or the wrapping of the facial image by a scarf or other accessories. Efforts on face recognition in controlled settings have been in the picture for the past several years; however, identification under uncontrolled conditions such as illumination, expression, and partial occlusion remains quite a matter of concern. Based on the literature, a classification is made in this paper of methods that address face recognition in the presence of partial occlusion. These methods are named part-based methods and make use of Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Non-negative Matrix Factorization (NMF), Local Non-negative Matrix Factorization (LNMF), Independent Component Analysis (ICA), and other variations. Feature-based and fractal-based methods consider features around the eye, nose, or mouth regions to be used in the recognition phase of the algorithms. Furthermore, the paper details the experiments and databases used by an assortment of authors to handle the problem of occlusion and the results obtained after performing diverse sets of analyses. Lastly, a comparison of the various techniques is shown in tabular format to give a precise overview of what different authors have already projected in this particular field.
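As a concrete illustration of the part-based family the survey classifies, here is a minimal eigenfaces (PCA) recognition sketch in Python; it is the generic textbook pipeline under our own variable names, not any one of the occlusion-robust variants reviewed in the paper.

```python
import numpy as np

def train_pca(faces, num_components):
    """faces: (N, P) matrix of N vectorized training face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centred data yields the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:num_components]           # (K, P) eigenfaces
    return mean, basis

def project(face, mean, basis):
    return basis @ (face - mean)          # K-dimensional feature vector

def recognize(probe, gallery_features, labels, mean, basis):
    """1-NN matching in the PCA subspace."""
    q = project(probe, mean, basis)
    dists = np.linalg.norm(gallery_features - q, axis=1)
    return labels[int(np.argmin(dists))]
```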


Keywords: Face recognition (FR), part based methods, feature based methods, fractal-based methods, partial occlusion, recognition rates (RR).
 
Received November 21, 2010; accepted July 28, 2011
  

Full Text

Sunday, 13 January 2013 05:20

Image Segmentation by Gaussian Mixture Models and Modified FCM Algorithm

Karim Kalti and Mohamed Ali Mahjoub
Research Unit SAGE, Team Signals, Image and Document, National Engineering School of Sousse,
University of Sousse, Tunisia

 
Abstract: The Expectation Maximization (EM) algorithm and the Fuzzy C-Means (FCM) clustering method are widely used in image segmentation. However, the major drawback of these methods is their sensitivity to noise. In this paper, we propose variants of these methods that aim at resolving this problem. Our approaches characterize each pixel by two features: the first describes the intrinsic properties of the pixel, and the second characterizes its neighbourhood. The classification is then made on the basis of an adaptive distance that privileges one or the other feature according to the spatial position of the pixel in the image. The obtained results show a significant performance improvement of our approaches over the standard versions of EM and FCM, respectively, especially regarding robustness to noise and the accuracy of the edges between regions.
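The sketch below gives one plausible reading of the modified FCM in Python: each pixel carries its own grey level plus a neighbourhood mean, and a gradient-driven weight decides which feature dominates the distance. The weighting rule is our guess for illustration; the paper's actual adaptive distance may differ.

```python
import numpy as np
from scipy import ndimage

def pixel_features(image):
    """Two features per pixel: its own grey level and a 3x3 neighbourhood mean."""
    img = image.astype(float)
    local_mean = ndimage.uniform_filter(img, size=3)
    return np.stack([img.ravel(), local_mean.ravel()], axis=1)   # (N, 2)

def adaptive_weights(image):
    """Near edges trust the pixel itself; in flat regions trust the neighbourhood."""
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    w = (grad / (grad.max() + 1e-12)).ravel()
    return np.stack([w, 1.0 - w], axis=1)                        # (N, 2)

def fcm_adaptive(features, weights, c=3, m=2.0, iters=50, seed=0):
    """FCM where each pixel's distance weighs the two features adaptively."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(features))            # memberships (N, c)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ features) / um.sum(axis=0)[:, None]
        diff2 = (features[:, None, :] - centers[None, :, :]) ** 2
        d2 = np.maximum((diff2 * weights[:, None, :]).sum(axis=2), 1e-12)
        inv = d2 ** (-1.0 / (m - 1))                             # standard FCM update
        u = inv / inv.sum(axis=1, keepdims=True)
    return u.argmax(axis=1)   # hard labels; reshape to image.shape for a label map
```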

Keywords: EM algorithm, FCM algorithm, image segmentation, adaptive distance.
 
Received May 4, 2011; accepted December 30, 2012
  

Full Text

Sunday, 13 January 2013 05:14

The Evaluation of Spoken Dialog Management Models for Multimodal HCIs

Rytis Maskeliunas
 Kaunas University of Technology, Studentu 48A-303, Lithuania
 
Abstract: The implementation of voice dialogs enables some of the aims of modern HCI services to be realized more successfully and efficiently. Sadly, multimodal Lithuanian HCIs carried by the most natural form of communication, speech, are still at the prototype stage, and no services were provided to end users at the time of writing. This paper describes an experimental evaluation of the possibilities of using spoken language dialogs as the main modality in modern application control. The recognition accuracy of the three main types of spoken dialogs (dictation, keyword spotting, isolated utterances) was evaluated, and a user preference survey was conducted on the proposed multimodal HCIs. The goal of this research was to gather results from possible everyday future users not familiar with such systems.

Keywords: Spoken dialog, dialog management, HCI, speech recognition, multimodal interactions.
 
Received June 25, 2011; accepted December 30, 2012
  

Full Text

Sunday, 13 January 2013 05:08

Brightness Preserving Image Contrast Enhancement Using Spatially Weighted Histogram Equalization

Chao Zuo, Qian Chen, Xiubao Sui, and Jianle Ren
Jiangsu Key Laboratory of Spectral Imaging & Intelligence Sense, Nanjing University of Science and Technology, China
 
Abstract: This paper presents a simple and effective method for image contrast enhancement called spatially weighted histogram equalization. The spatially weighted histogram not only counts how many times each grey value appears in an image, but also takes the local characteristics of each pixel into account. In the homogeneous regions of an image, the spatial weights of the pixels tend to zero, whereas at the edges they are very large. In order to maintain the mean brightness of the original image, the grey level transformation function calculated by spatially weighted histogram equalization is modified, and the final result is obtained by mapping the original image through this modified grey level transformation function. The experimental results show that the proposed method performs better than existing methods and preserves the original brightness quite well, so that it can be utilized in consumer electronic products.
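A behavioural Python sketch of the idea follows; the choice of gradient magnitude as the spatial weight and the simple mean shift used for brightness preservation are our reading of the abstract, not the paper's exact formulas.

```python
import numpy as np
from scipy import ndimage

def swhe(image):
    """image: 8-bit greyscale array."""
    img = image.astype(float)
    # Spatial weight: near zero in homogeneous regions, large at edges.
    weight = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)
    # Each pixel contributes its weight, not 1, to its grey level's bin.
    hist = np.bincount(image.ravel(), weights=weight.ravel(), minlength=256)
    cdf = np.cumsum(hist) / (hist.sum() + 1e-12)
    transform = np.round(255.0 * cdf)
    out = transform[image]
    # Crude brightness preservation: shift the result back to the input mean.
    return np.clip(out + (img.mean() - out.mean()), 0, 255).astype(np.uint8)
```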

Keywords: Image contrast enhancement, histogram equalization, brightness preserving enhancement, spatially weighted histogram.
 
Received July 14, 2011; accepted December 30, 2011
  

Full Text

Sunday, 13 January 2013 05:00

Constraint-Based Sequential Pattern Mining:
A Pattern Growth Algorithm Incorporating Compactness, Length and Monetary

1Bhawna Mallick, 1Deepak Garg, and 2Preetam Singh Grover
1Department of Computer Science & Engineering, Thapar University, India
2Department of Computer Science & Engineering, Guru Tegh Bahadur Institute of Technology
GGS Indraprastha University, India

 
Abstract: Sequential pattern mining is advantageous for several applications; for example, it finds the sequential purchasing behaviour of the majority of customers from a large number of customer transactions. However, existing research on discovering sequential patterns is based on the concept of frequency and presumes that customer purchasing behaviour sequences do not fluctuate with changes in time, purchasing cost, and other parameters. To adapt sequential patterns to these changes, constraints are integrated into the traditional sequential pattern mining approach; integrating such constraints with the mining process makes it possible to discover more user-centred patterns. Thus, in this paper, monetary and compactness constraints, in addition to frequency and length, are included in the sequential mining process for discovering pertinent sequential patterns from sequence databases. Also, a CFML-PrefixSpan algorithm is proposed by integrating these constraints with the original PrefixSpan algorithm, which allows all CFML sequential patterns to be discovered from a sequence database. The proposed CFML-PrefixSpan algorithm has been validated on synthetic sequence databases. The experimental results show that the efficacy of the sequential pattern mining process is further enhanced given that purchasing cost, time duration, and length are integrated into it.
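The sketch below shows a much-simplified prefix-growth miner in Python in the spirit of CFML-PrefixSpan, restricted to sequences of single items; the exact monetary and compactness definitions are placeholders for the paper's constraints.

```python
def cfml_prefix_span(db, prices, min_support=2, max_length=4,
                     min_monetary=0, max_span=7):
    """db: list of sequences; each sequence is a list of (item, timestamp)."""
    results = []

    def monetary(pattern):
        return sum(prices.get(item, 0) for item in pattern)

    def grow(pattern, projected):
        # Support of each candidate extension = number of projected sequences
        # that still contain it (the frequency constraint).
        counts = {}
        for seq in projected:
            for item in {it for it, _ in seq}:
                counts[item] = counts.get(item, 0) + 1
        for item, support in counts.items():
            if support < min_support:
                continue
            new_pattern = pattern + [item]
            if len(new_pattern) > max_length:             # length constraint
                continue
            if monetary(new_pattern) >= min_monetary:     # monetary constraint
                results.append((new_pattern, support))
            # Project on the first occurrence of `item`; keeping only events
            # within max_span of it stands in for the compactness constraint.
            new_projected = []
            for seq in projected:
                for k, (it, t) in enumerate(seq):
                    if it == item:
                        new_projected.append(
                            [(i, s) for i, s in seq[k + 1:] if s - t <= max_span])
                        break
            grow(new_pattern, new_projected)

    grow([], db)
    return results
```

Note that the monetary check only filters output without pruning growth, since a low-cost prefix can still extend to a pattern that satisfies the constraint.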

Keywords: Sequential pattern mining, constraint-based mining, pattern growth, CFML-PrefixSpan, compactness, monetary.
 
Received July 15, 2011; accepted May 22, 2012
  

Full Text

Sunday, 13 January 2013 04:54

Zernike Moments and SVM for Shape Classification in Very High Resolution Satellite Images

Habib Mahi1, Hadria Fizazi Isabaten2, and Chahira Serief1
1 Earth Observation Division, Centre of Space Techniques, Algeria
2 Faculty of Computing Science, Boudiaf University, Algeria
 
Abstract: In this paper, a Zernike moments-based descriptor is used as a measure of shape information for the detection of buildings in very high spatial resolution satellite images. The proposed approach comprises three steps. First, the image is segmented into homogeneous objects based on spectral and spatial information; the Mean Shift segmentation method is used to this end. Second, a Zernike feature vector is computed for each segment. Finally, a Support Vector Machine (SVM) based classification using the feature vectors as inputs is performed. Experimental results and a comparison with the ENVI (Environment for Visualizing Images) commercial package confirm the effectiveness of the proposed approach.
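A compact Python sketch of steps two and three follows: Zernike moment magnitudes computed directly from their standard definition for each segment mask, then fed to an SVM. The parameters and the direct (unoptimized) computation are our choices for illustration.

```python
import numpy as np
from math import factorial
from sklearn.svm import SVC

def zernike_descriptor(mask, degree=8):
    """Magnitudes of Zernike moments up to `degree` for a binary segment mask."""
    h, w = mask.shape
    y, x = np.mgrid[:h, :w]
    # Map the segment's pixels onto the unit disk centred on its centroid.
    cx, cy = x[mask > 0].mean(), y[mask > 0].mean()
    r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)
    rho = r / (r[mask > 0].max() + 1e-12)
    theta = np.arctan2(y - cy, x - cx)
    inside = (rho <= 1.0) & (mask > 0)
    feats = []
    for n in range(degree + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:                    # R_nm defined only for n-m even
                continue
            rad = np.zeros_like(rho)
            for s in range((n - m) // 2 + 1):  # radial polynomial R_nm(rho)
                c = ((-1) ** s * factorial(n - s) /
                     (factorial(s) * factorial((n + m) // 2 - s)
                      * factorial((n - m) // 2 - s)))
                rad += c * rho ** (n - 2 * s)
            z = (n + 1) / np.pi * np.sum(rad[inside]
                                         * np.exp(-1j * m * theta[inside]))
            feats.append(abs(z))               # magnitude: rotation invariant
    return np.array(feats)

# Classification on labelled segments (building / non-building), e.g.:
# X = np.stack([zernike_descriptor(seg) for seg in segments])
# clf = SVC(kernel='rbf').fit(X, labels)
```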

Keywords: Zernike moments, building extraction, Mean Shift, SVM, VHSR satellite images.
 
Received September 23, 2011; accepted December 30, 2011
  

Full Text

Sunday, 13 January 2013 04:43

High-Availability Decentralized Cryptographic Multi-Agent Key Recovery

Kanokwan Kanyamee and Chanboon Sathitwiriyawong
 Faculty of Information Technology, King Mongkut’s Institute of Technology Ladkrabang,
Bangkok, Thailand
 
Abstract: This paper proposes two versions of a novel high-availability decentralized cryptographic multi-agent key recovery system (HADM-KRS) that does not require a key recovery centre: HADM-KRSv1 and HADM-KRSv2. They enhance our previous work and fully comply with the latest key recovery framework of NIST. System administrators can specify the minimum number of key recovery agents (KRAs) according to security policies and requirements while maintaining compliance with legal requirements. This feature is achieved by applying the concepts of secret sharing and the power set to distribute the session key to the participating KRAs. The system uses the principle of secure session key management with an appropriately designed key recovery function, and is designed to remain highly available despite the failure of some KRAs. The performance evaluation results show that the proposed systems incur low processing times. They provide a security platform with good performance, fault tolerance, and robustness in terms of secrecy and availability.
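The core secret-sharing idea can be sketched briefly. Below is a minimal Shamir (t, n) threshold scheme in Python of the kind the abstract applies to distribute a session key among KRAs; the prime field and the way shares map to agents are toy choices, not the paper's design.

```python
import random

PRIME = 2 ** 127 - 1  # a Mersenne prime; toy field, key must be < PRIME

def split_key(secret, threshold, num_agents):
    """Return num_agents shares; any `threshold` of them recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):  # random polynomial with f(0) = secret
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_agents + 1)]

def recover_key(shares):
    """Lagrange interpolation at x = 0 over the available shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# shares = split_key(session_key, threshold=3, num_agents=5)
# recover_key(shares[:3]) == session_key, even if two KRAs are unavailable.
```

This illustrates why the system survives KRA failures: any threshold-sized subset of agents suffices, so availability degrades only when too few shares remain reachable.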

Keywords: Cryptographic key management, secret sharing, key recovery, key recovery agent.
 
Received November 5, 2011; accepted May 22, 2012
  

Full Text

Sunday, 13 January 2013 04:39

VLSI-Oriented Lossy Image Compression Approach Using DA-Based 2D-Discrete Wavelet

1Devangkumar Umakant Shah and 2Chandresh H. Vithlani
1Electronics & Communication Department, School of Engineering, R. K. University, Rajkot, India
2Electronics & Communication Department, Government Engineering College, Rajkot, India
 
Abstract: In this paper, we introduce a discrete wavelet transform based, VLSI-oriented lossy image compression approach; the transform is widely used as the core of digital image compression. Here, the Distributed Arithmetic (DA) technique is applied to determine the wavelet coefficients, so that the number of arithmetic operations can be reduced substantially. In addition, the compression rate is enhanced by introducing an RW block that forces some of the coefficients obtained from the high-pass filter to zero. Subsequently, Differential Pulse-Code Modulation (DPCM) and Huffman encoding are applied to acquire the binary sequence of the image. The functional simulation of each module is presented, and the performance of each module is analyzed in terms of the gates required, clock cycles required, power, processing rate, and processing time. From the analysis, it is found that the DWT module requires more gates for the transformation process than the other modules. Finally, the proposed compression approach is compared with existing methods in terms of processor area and power. The comparative results show that the proposed method offers better power efficiency, at 0.328 mW/chip, than the prior methods.
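Since DA is a multiplier-free hardware realization of the same filter sums, the datapath can be sketched behaviourally in software. The Python sketch below uses a Haar transform as a stand-in for the paper's wavelet filters, a simple thresholding step in place of the RW block, and plain DPCM; all parameter values are illustrative.

```python
import numpy as np

def haar_dwt_1d(signal):
    """One level of a Haar DWT (stand-in for the paper's wavelet filters)."""
    s = signal.astype(float)
    low = (s[0::2] + s[1::2]) / 2.0    # approximation coefficients
    high = (s[0::2] - s[1::2]) / 2.0   # detail coefficients
    return low, high

def rw_block(high, keep_ratio=0.25):
    """Zero all but the largest high-pass coefficients (the 'RW' step)."""
    out = high.copy()
    thresh = np.quantile(np.abs(high), 1.0 - keep_ratio)
    out[np.abs(out) < thresh] = 0.0
    return out

def dpcm(coeffs):
    """Differential coding: first value, then successive differences."""
    q = np.round(coeffs).astype(int)
    return np.concatenate(([q[0]], np.diff(q)))

row = np.array([52, 55, 61, 66, 70, 61, 64, 73])
low, high = haar_dwt_1d(row)
stream = np.concatenate([dpcm(low), dpcm(rw_block(high))])
# `stream` would then be Huffman-encoded into the final binary sequence.
```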

Keywords: Image compression, Discrete Wavelet Transform, Distributed Arithmetic, Differential Pulse-Code Modulation (DPCM), Huffman-coding.
 
Received November 11, 2011; accepted May 22, 2012
  

Full Text

Sunday, 13 January 2013 04:35

A Critical Comparison for Data Sharing Approaches

Sofien Gannouni, Mutaz Beraka, and Hassan Mathkour
Department of Computer Science, College of Computer and Information Sciences,
King Saud University, KSA

 
Abstract: Integrating and accessing data stored in autonomous, distributed, and heterogeneous data sources has been recognized as of great importance to both small- and large-scale businesses. Enhancing the accessibility and reusability of these data entails the development of new approaches to data sharing. These approaches should satisfy a minimal set of criteria in order to support the development of effective and comprehensive data sharing applications. In this paper, we first outline the four data sharing approaches and define a set of fundamental criteria for a data sharing approach. Moreover, we investigate the motivation for and importance of these criteria, and the inter-dependencies among them. Additionally, we compare the existing data sharing approaches based on the available options for each criterion.

Keywords: Data sharing approaches, data access, data integration, data consistency, performance, criteria options.
 
Received January 2, 2012; accepted September 23, 2012
  

Full Text

Sunday, 13 January 2013 04:30

Scalable Self-Organizing Structured P2P Information Retrieval Model Based on Equivalence Classes

Yaser A. Al-Lahham and Mohammad Hassan
Faculty of Science and Information Technology, Zarqa University, Jordan
 
Abstract: This paper proposes a new autonomous, self-organizing, content-based node clustering peer-to-peer information retrieval (P2PIR) model. The model uses an incremental transitive document-to-document similarity technique to build Local Equivalence Classes (LECs) of documents on a source node. A Locality Sensitive Hashing (LSH) scheme is applied to map a representative of each LEC into a set of keys, which are published to the hosting node(s). Similar LECs on different nodes form Universal Equivalence Classes (UECs), which establish the connectivity between these nodes. The same LSH scheme is used to submit queries to the subset of nodes most likely to have relevant information. The proposed model has been implemented, and the obtained results indicate efficiency in building connectivity between similar nodes and in correctly allocating and retrieving relevant answers for a high percentage of queries. The system was tested on different network sizes and proved to be scalable, as efficiency degraded gracefully while the network size grew exponentially.
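The key-mapping step can be sketched with random-hyperplane (cosine) LSH, one common choice of LSH family; whether the paper uses this particular family is not stated in the abstract, so treat the details below as assumptions.

```python
import numpy as np

# All peers must draw the same hyperplanes (e.g., from a shared seed) so that
# keys computed independently on different nodes are comparable.
rng = np.random.default_rng(seed=42)

def make_hyperplanes(dim, bits_per_key, num_keys):
    return rng.standard_normal((num_keys, bits_per_key, dim))

def lsh_keys(vector, planes):
    """One integer key per hash table: the sign pattern of random projections."""
    keys = []
    for table in planes:                 # table shape: (bits_per_key, dim)
        bits = (table @ vector) > 0
        keys.append(int(''.join('1' if b else '0' for b in bits), 2))
    return keys

# Publishing: each key of an LEC representative vector is routed to the node
# responsible for it; a query vector reuses lsh_keys, so similar content on
# different peers collides on the same keys with high probability.
planes = make_hyperplanes(dim=300, bits_per_key=12, num_keys=4)
```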

Keywords: Peer-to-Peer systems, Information retrieval, Node clustering, Equivalence class, Mapping, Incremental transitivity.
 
Received February 5, 2012; accepted May 22, 2012
  

Full Text

Sunday, 13 January 2013 04:25

Automated Weed Classification with Local Pattern-Based Texture Descriptor

Faisal Ahmed1, Hasanul Kabir1, Shayla Azad Bhuyan2, Hossain Bari3, and Emam Hossain4
 1Department of CSE, Islamic University of Technology, Bangladesh
                         2Department of EEE, BRAC University, Bangladesh
3Samsung Bangladesh R & D Center Ltd, Bangladesh
4Department of CSE, Ahsanullah University of Science and Technology, Bangladesh

 
Abstract: In conventional cropping systems, the removal of weed populations relies extensively on the application of chemical herbicides. However, this practice should be minimized because of the adverse effects of herbicide application on the environment, human health, and other living organisms. In this context, if the distribution of broadleaf and grass weeds could be sensed locally with a machine vision system, the selection and dosage of herbicide applications could be optimized automatically. This paper presents a simple yet effective texture-based weed classification method using local pattern operators. The objective is to evaluate the feasibility of using micro-level texture patterns to classify weed images into broadleaf and grass categories for real-time selective herbicide application. Three widely used texture operators, namely the Local Binary Pattern (LBP), Local Ternary Pattern (LTP), and Local Directional Pattern (LDP), are considered in our study. Experiments on 400 sample field images, with 200 samples from each category, show that the proposed method is capable of effectively classifying weed images and performs better than several existing methods.
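As a sketch of the pipeline, the Python code below computes basic 8-neighbour LBP histograms and hands them to an SVM (the classifier named in the keywords); LTP and LDP descriptors would slot into the same place, and the parameter choices here are ours for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(gray):
    """Basic 8-neighbour LBP codes pooled into a normalized 256-bin histogram."""
    g = gray.astype(int)
    centre = g[1:-1, 1:-1]
    code = np.zeros_like(centre)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        # Shifted view of the image aligned with the centre pixels.
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code += (neighbour >= centre).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / (hist.sum() + 1e-12)

# Training on labelled field images (0 = broadleaf, 1 = grass), e.g.:
# X = np.stack([lbp_histogram(img) for img in images])
# clf = SVC(kernel='rbf').fit(X, labels)
```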


Keywords: Local pattern operator, machine vision system, support vector machine, template matching, weed classification.
 
Received February 21, 2012; accepted August 2, 2012
  

Full Text

Sunday, 13 January 2013 04:19

Efficient Algorithm for Contrast Enhancement of Natural Images

Shyam Lal1 and Mahesh Chandra2
1ECE Department, Moradabad Institute of Technology, Moradabad (U.P.), India
2ECE Department, Birla Institute of Technology, Mesra, Ranchi (J.H.), India

 
Abstract: This paper proposes an efficient algorithm for the contrast enhancement of natural images. Contrast is a very important characteristic by which the quality of an image is judged as good or poor. The proposed algorithm consists of two stages: in the first stage, the poor-quality image is processed by a modified sigmoid function, and in the second stage, the output of the first stage is further processed by contrast limited adaptive histogram equalization to enhance the contrast of the image. In order to achieve better contrast enhancement, a novel mask based on the input value, together with the modified sigmoid formula, is used as a contrast enhancer in addition to contrast limited adaptive histogram equalization. The algorithm passes over the input image, operating on its pixels one by one in the spatial domain. Simulation and experimental results on benchmark test images demonstrate that the proposed algorithm provides better results than other state-of-the-art contrast enhancement techniques. It performs efficiently on both dark and bright images by adjusting their contrast appropriately, offers a simple and efficient approach to contrast enhancement, and can be used in various applications where images suffer from different contrast problems.
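A minimal Python sketch of the two-stage pipeline follows, with a generic modified-sigmoid stretch ahead of OpenCV's CLAHE; the gain and cutoff values, and the omission of the paper's novel mask, are our simplifications.

```python
import numpy as np
import cv2

def sigmoid_stage(gray, gain=8.0, cutoff=0.5):
    """Stage 1: sigmoid contrast stretch of an 8-bit greyscale image."""
    x = gray.astype(float) / 255.0
    y = 1.0 / (1.0 + np.exp(-gain * (x - cutoff)))
    # Rescale so the mapping spans the full grey range.
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)
    return (255 * y).astype(np.uint8)

def enhance(gray):
    """Stage 2: contrast limited adaptive histogram equalization (CLAHE)."""
    stage1 = sigmoid_stage(gray)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(stage1)
```

The gain parameter steepens the sigmoid (stretching mid-tones harder), while the cutoff sets which input level maps to mid-grey, which is why such a stage adapts to both dark and bright inputs.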

Keywords: Contrast enhancement, Modified sigmoid function, Image processing, Histogram equalization.
 
Received April 21, 2012; accepted August 10, 2012
  

Full Text
