Tuesday, 22 March 2016 04:04

Automatic Medical Image Segmentation Based on Finite Skew Gaussian Mixture Model

Nagesh Vadaparthi1, Srinivas Y2, SureshVarma P3, Sitharama Raju P4

1Department of I.T, MVGR College of Engineering, Vizianagaram, India.

2Department of I.T., GIT, GITAM University, Visakhapatnam. India.

3Department of Computer Science, Adikavi Nannayya University, Rajahmundry, India.

4Department of CSE, MVGR College of Engineering, Vizianagaram, India.

Abstract: A novel methodology for segmenting brain MRI images using the finite skew Gaussian mixture model is proposed to improve the effectiveness of the segmentation process. This model includes the Gaussian mixture model as a limiting case and, we believe, segments both the symmetric and asymmetric nature of brain tissues more effectively than the existing models. The segmentation is carried out by identifying the initial parameters and utilizing the EM algorithm for fine-tuning them. For effective segmentation, a hierarchical clustering technique is utilized. The proposed model has been evaluated on brain images extracted from the BrainWeb image database, using 8 sub-images of 2 brain images. The segmentation is evaluated using objective criteria, viz. the Jaccard Coefficient (JC) and Volumetric Similarity (VS). The performance of the reconstructed images is evaluated using image quality metrics. The experimentation is carried out using T1-weighted images and the results are presented. We infer from the results that the proposed model achieves good segmentation results when used in brain image processing.
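For context, a finite skew Gaussian mixture is typically built on the skew-normal density; the paper's exact parameterization is not reproduced here, but a standard form of the mixture is

f(x) = \sum_{k=1}^{K} \pi_k \, \frac{2}{\sigma_k} \, \phi\!\left(\frac{x-\mu_k}{\sigma_k}\right) \Phi\!\left(\lambda_k \, \frac{x-\mu_k}{\sigma_k}\right), \qquad \sum_{k=1}^{K} \pi_k = 1,

where \phi and \Phi are the standard normal pdf and cdf, and \lambda_k controls the skewness of component k; setting every \lambda_k = 0 recovers the ordinary Gaussian mixture, the limiting case mentioned above. The EM algorithm then alternates the usual expectation and maximization steps over these component densities.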

Keywords: Segmentation, Skew Gaussian Mixture Model, Objective Evaluation, Image Quality Metrics, EM algorithm.

Full Text



Wednesday, 17 February 2016 02:05

Hierarchical Based Group Key Transfer for Secure Group Communication

 

Kanimozhi Sakthivel1, Velumadhava Rajasekaran2, Selvamani Kadirvelu2, Kannan Arputharaj2

1Ramco Systems, India

2Department of Computer Science and Engineering, Anna University, India

 

Abstract: In this research paper, a scalable and efficient multicast security protocol for large and dynamic systems, relying on a trusted Key Generation Center (KGC), is proposed. In this protocol, we partition the entire group into several subgroups, each controlled by an Intermediate Controller (IC). Our protocol is based on the Iolus protocol and the hierarchical structure of the LKH protocol. The decomposition into subgroups is organized in a hierarchical manner, as in the LKH protocol, which helps us reduce the complexity of a member join or leave from O(n) to O(log m), where n represents the number of members in the entire group and m represents the number of members in each subgroup. Our protocol's performance is compared with that of the Iolus and LKH protocols. The performance is enhanced especially when there is a member leave operation.
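As a rough illustration of the complexity claim, a complete binary LKH-style key tree over m subgroup members forces only the leaving member's leaf-to-root key path to be refreshed, which is logarithmic in m. A minimal sketch (this counts path keys only and ignores the Iolus-style subgrouping and ICs of the actual protocol):

import math

# Minimal sketch, assuming a complete binary LKH key tree over m members.
# A leave invalidates every key on the member's leaf-to-root path, so only
# O(log m) keys must be refreshed instead of O(m).
def keys_to_refresh_on_leave(m: int) -> int:
    depth = math.ceil(math.log2(m))  # internal key levels above the leaves
    return depth + 1                 # path keys, including the root group key

for m in (8, 64, 1024):
    print(m, keys_to_refresh_on_leave(m))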

Keywords: Group communication, decentralized protocols, distributed protocols, logical key hierarchy and secure multicast.

Received April 6, 2012; accepted March 13, 2013

Full Text

 


Tuesday, 01 December 2015 05:22

An Efficient Approach for Mining Frequent Itemsets with Transaction Deletion Operation

 

Bay Vo1, 2, Thien-Phuong Le3, Tzung-Pei Hong4, Bac Le5, Jason Jung6

1Division of Data Science, Ton Duc Thang University, Vietnam

2Faculty of Information Technology, Ton Duc Thang University, Vietnam

3Faculty of Technology, Pacific Ocean University, Vietnam

4Department of Computer Science and Information Engineering,

National University of Kaohsiung, Taiwan

5Department of Computer Science, University of Science, Vietnam

6Department of Computer Engineering, Chung-Ang University, Republic of Korea

Abstract: Deletion of transactions from databases is common in real-world applications. Developing an efficient and effective mining algorithm to maintain discovered information is thus quite important in the data mining field. A lot of algorithms have been proposed in recent years, and the best of them is the pre-large-tree-based algorithm. However, this algorithm rebuilds the final pre-large tree whenever transactions are deleted, after which the FP-growth algorithm is applied to mine all frequent itemsets. The pre-large-tree-based approach thus requires twice the computation time needed for a single procedure. In this paper, we present an incremental mining algorithm to solve the above issues. An itemset-tidset tree structure is used to maintain large and pre-large itemsets. The proposed algorithm only processes deleted transactions to update some nodes in this tree, and all frequent itemsets are derived directly from the tree traversal process. Experimental results show that the proposed algorithm has good performance.
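The benefit of a tidset-based structure under deletion can be pictured in a few lines; this toy sketch (names and the support handling are assumptions, not the paper's algorithm) shrinks each itemset's tidset instead of rescanning the database:

# Toy sketch of tidset maintenance under transaction deletion: each itemset
# maps to the set of transaction ids (tids) containing it, so deleting
# transactions only removes tids and prunes itemsets that fall below support.
from typing import Dict, FrozenSet, Set

def delete_transactions(tidsets: Dict[FrozenSet[str], Set[int]],
                        deleted_tids: Set[int],
                        min_support: int) -> Dict[FrozenSet[str], Set[int]]:
    updated = {}
    for itemset, tids in tidsets.items():
        remaining = tids - deleted_tids
        if len(remaining) >= min_support:
            updated[itemset] = remaining
    return updated

tidsets = {frozenset({"a"}): {1, 2, 3}, frozenset({"a", "b"}): {1, 3}}
print(delete_transactions(tidsets, deleted_tids={3}, min_support=2))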

 

Keywords: Data mining, frequent itemsets, incremental mining, pre-large itemsets, itemset-tidset tree.

 

Received November 1, 2012; accepted June 19, 2013

Full Text

 

Tuesday, 01 December 2015 05:06

Orthophoto Information System in Turkey in the View of Spatial Data Infrastructure

Guler Yalcin

Department of Geomatic Engineering, Osmaniye Korkut Ata University, Turkey

Abstract: Spatial technologies are evolving quickly, particularly with regard to land-related data. The design of a land information system needs to be comprehensive enough to take these technologies into account and manage them through a Spatial Data Infrastructure (SDI). The most effective management of these technological trends is likely to lie in the spatial enablement of the various sets of information. SDIs aim to facilitate and coordinate the sharing of spatial data between stakeholders, based on a dynamic and multi-hierarchical concept that encompasses the policies, organizational remits, data, technologies, standards, delivery mechanisms, and financial and human resources necessary to support those working at the appropriate (global, regional, national, local) scale. Satellite images and/or aerial photographs, which are among the indispensable layers of spatial information systems, are quite important in the context of a National Spatial Data Infrastructure (NSDI). This study overviews photogrammetry, orthophoto production studies, and the Orthophoto Information System (OIS) in Turkey within the scope of SDI.

Keywords: SDI, photogrammetry, orthophoto, map production, interoperability.

Received January 8, 2014; accepted December 23, 2014

Full Text

 


Tuesday, 01 December 2015 05:02

Region Adaptive Robust Watermarking Scheme Based on Homogeneity Analysis

Priyanka Singh and Suneeta Agarwal

Motilal Nehru National Institute of Technology, India

Abstract: The need to counter security breaches gave rise to watermarking, one of the efficient methods to maintain the integrity of digital content and prove rightful ownership. Region adaptive watermarking is a technique based on the content of the image that is to be protected against the various possible attacks. A homogeneity analysis of the image is performed using quad-tree-based image segmentation to identify appropriate sites for embedding the secret information. The information itself is extracted from the image as a feature, which is hidden in the cover image using Singular Value Decomposition (SVD) properties. The robustness of the proposed algorithm against various attacks is validated by the high Peak Signal-to-Noise Ratio (PSNR) and Normalized Cross-Correlation (NCC) values attained in the experiments carried out.
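A common way to exploit SVD properties for embedding, which we assume is in the spirit of this scheme (the quad-tree region selection and the extracted feature are not reproduced here), is to perturb the singular values of a block:

import numpy as np

# Minimal SVD-embedding sketch on one block (assumption: the actual scheme
# embeds only in homogeneous regions chosen by the quad-tree analysis).
def embed(block, watermark, alpha=0.05):
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    return U @ np.diag(S + alpha * watermark) @ Vt, S  # marked block + original S

def extract(marked, original_S, alpha=0.05):
    _, S_marked, _ = np.linalg.svd(marked, full_matrices=False)
    return (S_marked - original_S) / alpha

# Demo on a block whose singular-value gaps exceed the perturbation, so the
# perturbed values keep their order and extraction is exact.
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((8, 8)))
V0, _ = np.linalg.qr(rng.standard_normal((8, 8)))
block = U0 @ np.diag(np.arange(16.0, 8.0, -1.0)) @ V0.T
wm = rng.random(8)
marked, S0 = embed(block, wm)
print(np.allclose(extract(marked, S0), wm))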

Keywords: Homogeneity analysis, NCC, PSNR, quad tree, region adaptive watermarking, SVD.

 

Received January 16, 2013; accepted August 29, 2013

Full Text

 

 

Sunday, 29 November 2015 07:33

Solving Flow Allocation Problems and Optimizing System Reliability of Multisource Multisink Stochastic Flow Network

 

Moatamad Hassan

Department of Mathematics, Aswan University, Egypt

 

Abstract: The flow allocation problem is one of the important steps in the reliability evaluation or optimization of a stochastic flow network. In a single-source single-sink network, it is easy to determine the flow on each path using one of the best-known methods, while in the case of a multisource multisink flow network the flow allocation problem becomes more complicated and few studies have dealt with it. This paper investigates the flow allocation problem of a multisource multisink stochastic flow network, assuming that several sorts of resource flows are transmitted through a network with unreliable nodes. The mathematical formulation of the problem is modified to increase the efficiency of obtaining optimal solutions that satisfy all constraints. A Genetic Algorithm (GA) is proposed to solve the flow allocation problem in existing multisource multisink networks such that the reliability of the system capacity vector is maximized. The results obtained for test cases are compared with other proposed methods to show that the proposed algorithm is efficient in obtaining optimal solutions that satisfy all constraints, and that it achieves a maximum value of the reliability of the system capacity vector. Finally, the proposed GA is employed to optimize the system reliability of multisource multisink stochastic flow networks.
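The GA loop itself is standard; a bare-bones skeleton (the fitness function, encoding, and operators here are placeholders, not the paper's formulation of capacity-vector reliability) might look like:

import random

# Bare-bones GA skeleton. In the paper's setting, fitness would score a
# candidate flow allocation by the reliability of the resulting system
# capacity vector; here it is a toy placeholder objective.
def fitness(chromosome):
    return -sum((g - 0.5) ** 2 for g in chromosome)

def evolve(pop_size=30, genes=10, generations=50, mut_rate=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:          # mutation
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(evolve()))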

 

Keywords: Flow allocation problem, stochastic-flow network, GA.

 

Received August 26, 2013; accepted February 4, 2015

Full Text

 


Sunday, 29 November 2015 07:31

Hybrid SVM/HMM Model for Arabic Phonemes Recognition

Elyes Zarrouk and Yassine Benayed

Multimedia Information System and Advanced Computing Laboratory, Sfax University, Tunisia

 

Abstract: Hidden Markov Models (HMM) are currently widely used in Automatic Speech Recognition (ASR) as the most effective models. Yet, they sometimes pose problems of discrimination. The hybridization of Artificial Neural Networks (ANN), in particular Multi-Layer Perceptrons (MLP), with HMM is a promising technique to overcome these limitations. In order to improve the results of the recognition system, we use Support Vector Machines (SVM), which are characterized by high predictive power and discrimination. Incorporating SVM with HMM gives rise to a new ASR system. Using 2800 occurrences of Arabic phonemes, this work presents a comparative study of our recognition system, as follows: using the standard HMM alone leads to a recognition rate of 66.98%; with the hybrid MLP/HMM system we achieve 73.78%; and our proposed SVM/HMM system realizes the best performance, whereby we achieve a recognition rate of 75.8%.
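Such hybrids typically replace the HMM's generative emission model with discriminative per-frame posteriors from the SVM. The toy sketch below shows that plumbing, with random features standing in for real acoustic frames and uniform transition probabilities (both assumptions; the paper's models are trained on Arabic phoneme data):

import numpy as np
from sklearn.svm import SVC

# Train an SVM to emit per-frame phoneme posteriors (toy random features).
X = np.random.randn(200, 13)
y = np.random.randint(0, 4, 200)
svm = SVC(probability=True).fit(X, y)

def viterbi(log_emis, log_trans, log_init):
    # Standard Viterbi decoding over log-domain scores.
    T, N = log_emis.shape
    delta = log_init + log_emis[0]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

frames = np.random.randn(30, 13)
log_emis = np.log(svm.predict_proba(frames) + 1e-12)  # SVM posteriors as emissions
N = log_emis.shape[1]
log_trans = np.log(np.full((N, N), 1.0 / N))          # uniform transitions (assumption)
log_init = np.log(np.full(N, 1.0 / N))
print(viterbi(log_emis, log_trans, log_init))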

 

Keywords: ASR, Hybrid System, HMM, MLP, SVM.

 Received July 24, 2012; accepted September 27, 2012

Full Text

 

 


Wednesday, 28 October 2015 07:16

Iris Recognition Using Localized Zernike’s Feature and SVM

Alireza Pirasteh1, Keivan Maghooli2, Seyed Farhood Mousavizadeh1

1Semnan Branch, Islamic Azad University, Iran

2Department of Biomedical Engineering, Islamic Azad University, Iran

Abstract: Iris recognition is an approach that identifies people based on the unique patterns within the region surrounding the pupil of the eye. Rotation, scale, and translation invariance are very important in image recognition, and several rotation-invariant feature approaches have been introduced. Zernike Moments (ZMs) are the most widely used family of orthogonal moments due to their extra property of being invariant to an arbitrary rotation of the images, and these moment invariants have been successfully used in pattern recognition. For designing a high-accuracy recognition system, a new and accurate way of extracting features is indispensable. In order to have an accurate algorithm, ZMs were used for feature extraction after image segmentation. After feature extraction, a classifier is needed; a Support Vector Machine (SVM) can serve as a good classifier. For the N-class problem in iris classification, SVM applies N two-class machines. For validation, the data are divided into K subsets; in each round, one subset is held out for testing while the remaining subsets are used for training. This method is called K-fold cross-validation (leave-one-out when K equals the number of samples), and each subset is considered in turn. The simulation stage was accomplished with the IIT database, and the comparison between this method and some other methods shows a high recognition rate of 98.61% on this database.
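For reference, the K-fold protocol described above takes only a few lines with a standard toolkit; here the classic iris-flower data stands in for the Zernike-moment iris features, which are not available in this listing:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# 5-fold cross-validation of an SVM classifier. Note that sklearn's SVC
# decomposes the N-class problem into pairwise two-class machines
# (one-vs-one), whereas the abstract describes N one-vs-rest machines.
X, y = load_iris(return_X_y=True)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(scores.mean())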

Keywords: Biometrics, iris recognition, zernike, K-Fold, SVM.

Received March 14, 2014; accepted February 10, 2015

 

Wednesday, 28 October 2015 07:16

Design and Construction of Secure Digital Will System

Tzong-Sun Wu, Yih-Sen Chen, and Han-Yu Lin

Department of Computer Science and Engineering, National Taiwan Ocean University, Taiwan

Abstract: The rapid growth of network communications has made applications of electronic commerce the future trend of commercial activities. To ensure the effectiveness of transactions, governments have put electronic signature laws into practice. They have also proposed guiding or assisting projects to encourage or help enterprises and organizations construct secure electronic environments. Because of the availability and ease of use of the information infrastructure, daily life is increasingly digitalized. With the promotion of life education and religious groups, people have gradually accepted the concept of writing down wills before their death, which can reduce conflicts and arguments over inheritance. A legitimate will can be produced with the witness of notaries in the court or the notary public offices. If a testator wishes to modify his will, he must perform the above procedures again. Since this requires fees as well as transportation costs for notaries and witnesses, the generation of a traditional will is rather costly; it can also be inefficient because of its complicated witnessing procedures. Currently, the electronic Wills Act is under discussion but not yet put into practice, and to date there is no literature discussing the issues of the digital will. The properties of security, convenience, and effectiveness are the most significant reasons why people would adopt the digital will mechanism in the future. Based on the mechanisms of public key infrastructure and key escrow systems, we construct a secure and realistic model for a digital will system which fulfills the above-mentioned properties and is suitable for practical implementation.

Keywords: Will system, key escrow, testamentary trust.

Received February 20, 2014; accepted September 9, 2014

Wednesday, 28 October 2015 07:15

Ontology-Based System for Conceptual Data Model Evaluation

Zoltan Kazi1, Ljubica Kazi1, Biljana Radulovic1, and Madhusudan Bhatt2

1Technical Faculty “Mihajlo Pupin” Zrenjanin, University of Novi Sad, Serbia

2Department of Computer Science, University of Mumbai, India

Abstract: Conceptual data modelling is one of the critical phases in Information System (IS) development. In this paper we present a method, a software tool, and results on automating the evaluation of a Conceptual Data Model (CDM) from a semantic perspective. The approach is based on mapping an ontology to the conceptual data model. An ontology that represents domain knowledge and the data model are transformed into PROLOG clauses and integrated with reasoning rules into a single PROLOG program. The formalization of the ontology and the data model is automated within a software tool. Special metrics are defined with the aim of enabling the calculation of semantic marks for data models. An empirical study shows the results of using this tool.
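The transformation into PROLOG clauses can be pictured as serializing model elements into facts; the clause vocabulary in this toy sketch is invented for illustration, not taken from the paper:

# Toy sketch: emit PROLOG facts for the entities and relationships of a
# conceptual data model, ready to be combined with reasoning rules.
model = {
    "entities": ["student", "course"],
    "relationships": [("student", "enrolls_in", "course")],
}

def to_prolog(model):
    clauses = [f"entity({e})." for e in model["entities"]]
    clauses += [f"relationship({r}, {a}, {b})." for a, r, b in model["relationships"]]
    return "\n".join(clauses)

print(to_prolog(model))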

Keywords: CDM, evaluation, ontology, formalization, software tool.

Received February 17, 2014; accepted April 2, 2015

Wednesday, 28 October 2015 07:15

Identity Based Broadcast Encryption with Group of Prime Order

Yang Ming1 and Yumin Wang2

1School of Information Engineering, Chang’an University, China

2State Key Lab of Integrated Service Network, Xidian University, China

Abstract: Identity Based Broadcast Encryption (IBBE) is a cryptographic primitive which allows a center to transmit encrypted data over a broadcast channel to a large number of users such that only a select subset of privileged users can decrypt it. In this paper, based on bilinear groups, we propose a secure IBBE scheme with constant-size system parameters, private keys, and ciphertexts. The construction uses the dual pairing vector space technique in prime order groups, which can simulate the canceling and parameter-hiding properties of composite order groups. Furthermore, we show that the proposed scheme is fully secure (adaptively secure) in the standard model under the Decisional Linear (DLIN) assumption (static, non-q-based), via a nested dual system encryption argument. To the best of our knowledge, our scheme is the first provably secure IBBE scheme in the literature to achieve this security level.

Keywords: Cryptography, encryption, IBBE, dual pairing vector space, full security, provable security.

 

Received January 8, 2014; accepted September 9, 2014

Wednesday, 28 October 2015 07:14

An Instance Communication Channel Key Organizer Model for Enhancing Security in Cloud Computing

Thomas Brindha and Ramaswamy Swarnammal Shaji

Department of Information Technology, Noorul Islam University, India

Abstract: Cloud computing has evolved as one of the next-generation structural designs of IT endeavors in recent years. It functions like the Internet by accessing and sharing computing resources as virtual resources in a protected and scalable way, thereby gaining huge influence in corporate data centers. Even as remotely hosted, governed services have long been a component of the IT landscape, an enhanced awareness of cloud computing has been powered by ubiquitous networks, maturing standards, the rise of hardware and software virtualization, and the drive to make IT costs flexible and transparent. Several schemes have been proposed in cloud computing for enhancing security; however, most of them suffer from data leakage, which becomes a greater issue when sharing data in the cloud computing environment. To address data leakage in the cloud, we construct an efficient technique, the Instance Communication Channel Key Organizer (ICCKO) model, to support security and data confidentiality in a secure manner. We formally prove the data security of the ICCKO model based on client and server, and analyze its performance and computational complexity. The performance of the ICCKO model is evaluated in terms of communication overhead, log creation time, data transfer rate, and channel disturbance level for sharing data securely. Extensive security and performance analysis shows that the proposed scheme is extremely efficient and provides provably secure data sharing.

Keywords: Cloud computing, data sharing, security, key organizer, communication channel, data management.

 Received January 4, 2014; accepted May 10, 2014

Wednesday, 28 October 2015 07:13

Using the Ant Colony Algorithm for Real-Time Automatic Route of School Buses

Tuncay Yigit and Ozkan Unsal

Department of Computer Engineering, Süleyman Demirel University, Turkey

Abstract: Transportation and distribution systems are improving at an increasing pace with the help of current technological facilities, and the complexity of those systems is increasing as well. Vehicle routing problems are difficult to solve with conventional techniques, and improving the routes used in distribution systems provides significant savings in terms of time and cost. In this paper, the current routes of school buses, a sub-branch of vehicle routing problems, are optimized using Ant Colony Optimization (ACO), a heuristic artificial intelligence algorithm. The developed software recommends the most suitable and shortest route, illustrated on a map, by taking the students' instantaneous waiting locations online. The results of this study suggest that the current routes can be improved by using ACO.
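The heart of ACO is pheromone-biased tour construction followed by evaporation and deposit; a compact sketch on a random distance matrix follows (the parameter values are common defaults, not the paper's tuning, and real stops would come from the online student locations):

import random

def aco_tour(dist, n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:  # pheromone- and distance-biased next stop
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha / dist[i][j] ** beta for j in cand]
                tour.append(random.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):  # evaporation
            for j in range(n):
                tau[i][j] *= 1 - rho
        for tour, length in tours:  # deposit proportional to tour quality
            for k in range(n):
                tau[tour[k]][tour[(k + 1) % n]] += Q / length
    return best_tour, best_len

random.seed(1)
dist = [[0 if i == j else random.uniform(1, 10) for j in range(6)] for i in range(6)]
print(aco_tour(dist))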

Keywords: ACO, school bus routing, vehicle routing problems, mobile software.

Received December 30, 2013; accepted December 23, 2014

Wednesday, 28 October 2015 07:12

New Prototype of Hybrid 3D-Biometric Facial Recognition System

Haitham Issa1, Sali Issa2, and Mohammad Issa3

1Department of Information Technology, Rustaq College of Applied Sciences, Oman

2Department of Computer Engineering, University of Applied Science, Jordan

3Department of Electronics and Information Engineering, Huazhong University of Science and Technology, China

Abstract: In the last decades, a lot of 3D face recognition techniques have been proposed. They can be divided into three categories: holistic matching techniques, feature-based techniques, and hybrid techniques. In this paper, a hybrid technique is used: a prototype of a new hybrid face recognition technique that depends on 3D face scan images is designed, simulated, and implemented. Some geometric rules are used for analyzing and mapping the face. Image processing is used to get the two-dimensional values of predetermined, specific facial points; software programming is used to compute three-dimensional coordinates of the predetermined points and to calculate several geometric parameter ratios and relations. A neural network technique is used for processing the calculated geometric parameters and then performing facial recognition. The new design is not affected by variations in pose, illumination, and expression, and has a higher accuracy level compared with 2D analysis. Moreover, the proposed algorithm outperforms recently published biometric recognition algorithms in terms of cost, confidentiality of results, and availability of design tools.

Keywords: Image processing, face recognition, probabilistic neural network, photo modeler software.

 Received October 30, 2013; accepted November 20, 2014

Wednesday, 28 October 2015 07:11

A Combined Approach for Stereoscopic 3D Reconstruction Model based on Improved Semi Global Matching

Rajeshkannan Sundararajan and Reeba Korah

Department of Electronics and Communication Engineering, St. Joseph’s College of Engineering, India

Abstract: The effective recovery of the 3D structure of a scene using two or more 2D images of the scene, each acquired from a different viewpoint, is a challenging task in stereovision. Defining pixel correspondence in stereo pairs is a fundamental process for automated, image-based, effective 3D reconstruction. This paper presents a modified Census-based approach to local cost optimization, where the local matching cost is combined with the Sum of Squared Absolute Differences (SSAD) of the image color values and then aggregated. From the aggregated cost, an effective disparity map is obtained using Semi-Global Matching (SGM), which improves the quality of the matches. The proposed approach represents a fusion of state-of-the-art algorithms to improve matching quality with a reduced number of bad pixels. Finally, a stereoscopic 3D view is obtained by merging with a triangulation algorithm in a realistic manner. Because of its more realistic depth perception, our proposed three-dimensional stereo model finds application in the medical field, where it improves surgical success with shorter operation times, and in space research, where effective analysis can be made using a calibrated, photo-realistic 3D model of a space structure.
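The census transform underlying the matching cost is easy to state: each pixel becomes a bit string recording which neighbours are darker than the centre, and the matching cost between two pixels is the Hamming distance of their strings. A minimal sketch (3x3 window; borders wrap via np.roll, which a real implementation would handle properly):

import numpy as np

def census(img, r=1):
    # Bit string per pixel: 1 where the neighbour is darker than the centre.
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming(a, b):
    # Per-pixel Hamming distance between two census images.
    x = a ^ b
    count = np.zeros(x.shape, dtype=np.uint8)
    while x.any():
        count += (x & np.uint64(1)).astype(np.uint8)
        x >>= np.uint64(1)
    return count

left = np.random.randint(0, 255, (32, 32)).astype(np.uint8)
right = np.roll(left, 1, axis=1)  # stand-in for a shifted stereo view
print(hamming(census(left), census(right)).mean())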

 

Keywords: Cost function, disparity map, modified census transform, SAD, SGM, SSAD, triangulation.

 

Received August 25, 2013; accepted March 14, 2014

 

Wednesday, 28 October 2015 07:10

 

Efficient Modified Elliptic Curve Diffie-Hellman Algorithm for VoIP Networks

Subashri Thangavelu1 and Vaidehi Vijaykumar2

1Department of Electronics Engineering, Anna University, India

2AU-KBC Research Centre, Anna University, India

Abstract: Security in Voice over Internet Protocol (VoIP) networks has turned out to be the most challenging issue in recent years. VoIP packets are easy for hackers to eavesdrop on due to the use of the Diffie-Hellman (DH) algorithm for exchanging a single common key between two end-users. As a result, the confidentiality of voice data turns out to be a challenging issue, and a strong key management algorithm is needed to secure voice data from all kinds of attacks. In this paper, an efficient Modified Elliptic Curve Diffie-Hellman (MECDH) algorithm using Split Scalar Multiplication (SSM) is proposed, which secures voice data from Man-in-the-Middle (MITM) attack by dynamically generating the shared key. Further, in order to speed up the scalar multiplication used in the traditional Elliptic Curve Diffie-Hellman (ECDH) algorithm, the SSM technique is adopted in the proposed MECDH algorithm. The performance of the proposed MECDH algorithm is compared with that of the traditional ECDH and validated on the Java platform. From the results obtained, it is observed that the computation time taken by the proposed MECDH algorithm is 89% less than that of the traditional ECDH algorithm and 11% less than that of the key-changing ECDH. A high security level is also achieved with the proposed idea of using dynamic keys instead of a single common shared secret key.
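As a baseline for what MECDH modifies, a plain ECDH exchange looks as follows (shown with Python's cryptography package rather than the authors' Java code; the SSM speed-up and the session-wise re-keying policy are not reproduced):

from cryptography.hazmat.primitives.asymmetric import ec

# Plain ECDH: each party generates an ephemeral key pair and derives the same
# shared secret from the peer's public key. Regenerating the pairs per session
# gives the dynamic-key behaviour the abstract argues for.
alice = ec.generate_private_key(ec.SECP256R1())
bob = ec.generate_private_key(ec.SECP256R1())

secret_a = alice.exchange(ec.ECDH(), bob.public_key())
secret_b = bob.exchange(ec.ECDH(), alice.public_key())
assert secret_a == secret_b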

Keywords: Elliptic curve, key exchange, key change, MITM attack, SSM, computation time.

 

Received March 5, 2013; accepted September 13, 2014

 

 
