International Journal of Computers

 
E-ISSN: 1998-4308
Volume 7, 2013



Issue 1, Volume 7, 2013


Title of the Paper: Reverse Engineering of the Digital Curve Outlines Using Genetic Algorithm

 

Authors: Muhammad Sarfraz, Malik Z. Hussain, Misbah Irshad

Pages: 1-10

Abstract: A scheme, consisting of an iterative approach for the recovery of digitized, hand-printed and electronic planar objects, is proposed. It vectorizes generic shapes by recovering their outlines. Rational quadratic functions are used for curve fitting, and a heuristic genetic algorithm technique is applied to find optimal values of the shape parameters in the description of the rational functions. The proposed scheme is fully automated and vectorizes the outlines of planar images in a reverse engineering manner.
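As an illustration only (not the authors' implementation), the following Python sketch fits one rational quadratic segment to sampled outline points by letting a small genetic algorithm search the shape (weight) parameter; the control points, GA settings and synthetic data are all assumptions made for the example.

```python
# Sketch: GA search of the shape parameter w of a rational quadratic segment.
import numpy as np

rng = np.random.default_rng(0)
P0, P1, P2 = np.array([0.0, 0.0]), np.array([0.5, 1.0]), np.array([1.0, 0.0])

def rational_quadratic(t, w):
    """Rational quadratic Bezier-style segment with shape parameter w."""
    b0, b1, b2 = (1 - t) ** 2, 2 * t * (1 - t), t ** 2
    num = np.outer(b0, P0) + w * np.outer(b1, P1) + np.outer(b2, P2)
    return num / (b0 + w * b1 + b2)[:, None]

t = np.linspace(0, 1, 50)
target = rational_quadratic(t, 3.0) + rng.normal(0, 0.005, (50, 2))  # fake outline

def fitness(w):
    return -np.sum((rational_quadratic(t, w) - target) ** 2)  # negative SSE

pop = rng.uniform(0.1, 10.0, 40)                 # candidate shape parameters
for gen in range(60):
    fit = np.array([fitness(w) for w in pop])
    idx = rng.integers(0, len(pop), (len(pop), 2))        # tournament selection
    parents = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], pop[idx[:, 0]], pop[idx[:, 1]])
    a = rng.random(len(pop))                              # arithmetic crossover
    children = a * parents + (1 - a) * rng.permutation(parents)
    children += rng.normal(0, 0.1, len(pop)) * (rng.random(len(pop)) < 0.2)  # mutation
    pop = np.clip(children, 0.05, 20.0)

best = pop[np.argmax([fitness(w) for w in pop])]
print(f"recovered shape parameter w ~= {best:.2f}")
```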


Title of the Paper: Fetal Weight and Gender Estimation Using Computer based Ultrasound Images Analysis

 

Authors: Yudha Noor Aditya, Heamn Noori Abduljabbar, Christina Pahl, Lai Khin Wee, Eko Supriyanto

Pages: 11-21

Abstract: This paper proposes a fetal gender and weight estimation method using ultrasound images. Normally, gender and weight estimation is done by the physician through observation only. The accuracy of the estimation depends strongly on the experience of the physician, as the images produced by low-cost two-dimensional (2D) ultrasound machines are not clear. In order to increase the accuracy of gender and weight estimation during the fetal scanning process, a method has been developed using thresholding and Canny segmentation. The percentage of white level in the processed image is calculated to classify the gender of the fetus. Canny edge detection is used for segmentation, and then parameters such as femur length (FL), biparietal diameter (BPD) and abdominal circumference (AC) are measured to estimate the fetal weight. The results show that a white-level percentage equal to or larger than 46% indicates a male fetus, while images with a white-level percentage below 46% are classified as female. Fetal weight is then calculated from the parameter measurements obtained from both the original and the segmented fetal ultrasound images; the two methods have been compared, and the difference is 40 grams. The method can be further developed and applied in low-cost two-dimensional ultrasound machines to improve the accuracy of gender identification and reduce human error during the scanning process.
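A minimal sketch of the white-level rule stated in the abstract: threshold the grayscale frame and classify the fetus as male when the white-pixel percentage is at least 46%. The global threshold of 128 and the random test frame are illustrative assumptions; the paper's full preprocessing, including its Canny step and the weight estimation from FL, BPD and AC, is not reproduced here.

```python
# Sketch: gender classification by white-level percentage after thresholding.
import numpy as np

def classify_gender(gray_image: np.ndarray, threshold: int = 128) -> str:
    binary = gray_image >= threshold      # simple global thresholding
    white_pct = 100.0 * binary.mean()     # percentage of white pixels
    return "male" if white_pct >= 46.0 else "female"

frame = np.random.default_rng(1).integers(0, 256, (480, 640), dtype=np.uint8)
print(classify_gender(frame))
```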


Title of the Paper: Gifted Students’ Use of Web 2.0 Technologies for English Language Learning

 

Authors: M. M. Yunus, L. S. L. Kwan

Pages: 22-30



Issue 2, Volume 7, 2013


Title of the Paper: General Environment for Probabilistic Predictive Monitoring

 

Authors: Silvano Mussi

Pages: 31-49

Abstract: The proposal presented in this paper concerns a general environment for probabilistic predictive monitoring. More precisely, the paper is conceptually subdivided into three parts. The first part presents the theoretical model underlying the proposal. In particular, the model is presented as a hierarchy of three conceptual levels. The first conceptual level is a set of basic concepts and definitions. This first level serves as a platform on which the second conceptual level, the definition of a time-slice-based causal network, is built. This second level is, in turn, a platform on which the third conceptual level, the definition of a probabilistic network, is built. This last level contains the mathematical foundations of the model and defines a general probabilistic prediction algorithm that can be applied to real-world problems in heterogeneous domains. The second part of the paper presents a general predictive monitoring tool in which the predictive algorithm defined in the first part is embedded. Since such a general tool needs to be equipped with specific domain knowledge in order to be usefully applied to real-world problems, the third part of the paper presents a general environment in which users can easily build, use and administer specific predictive monitoring tools equipped with proper domain knowledge related to specific application fields.
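As a toy illustration of prediction over time slices (not the paper's causal-network model), the sketch below propagates a belief about a monitored system through a first-order Markov transition matrix; the two states and all probabilities are invented for the example.

```python
# Sketch: forward probabilistic prediction over time slices (Markov assumption).
import numpy as np

states = ["nominal", "critical"]
transition = np.array([[0.95, 0.05],    # P(next | current = nominal)
                       [0.30, 0.70]])   # P(next | current = critical)

def predict(belief: np.ndarray, n_slices: int) -> np.ndarray:
    """Propagate the current belief n_slices time slices into the future."""
    for _ in range(n_slices):
        belief = belief @ transition
    return belief

now = np.array([1.0, 0.0])              # monitored system currently nominal
print(dict(zip(states, predict(now, 10).round(3))))
```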


Title of the Paper: Incremental Continuous Ant Colony Optimization for Tuning Support Vector Machine’s Parameters

 

Authors: Hiba Basim Alwan Al-Dulaimi, Ku Ruhana Ku-Mahamud

Pages: 50-57

Abstract: Support Vector Machines are considered excellent pattern classification techniques. Classifying a pattern with high accuracy depends mainly on tuning the Support Vector Machine parameters, namely the generalization error parameter and the kernel function parameter. Tuning these parameters is a complex process, and Ant Colony Optimization can be used to overcome the difficulty. Ant Colony Optimization originally deals with discrete optimization problems; hence, in applying it to optimize Support Vector Machine parameters, which are continuous in nature, the values would have to be discretized. The discretization process results in a loss of information and thus affects the classification accuracy and search time. This paper presents an algorithm to optimize Support Vector Machine parameters using incremental continuous Ant Colony Optimization without the need to discretize continuous values. Eight datasets from UCI were used to evaluate the performance of the proposed algorithm. The proposed algorithm demonstrates its credibility in terms of classification accuracy when compared to grid search techniques, GA with feature chromosome-SVM, PSO-SVM, and GA-SVM. Experimental results also show promising performance in terms of classification accuracy and the size of the feature subset.
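The sketch below is a simplified continuous ACO in the spirit of ACO_R, not the authors' incremental variant: an archive of candidate (log C, log gamma) pairs guides Gaussian sampling, with cross-validated SVM accuracy as fitness. Archive size, ant count and the iris dataset are illustrative assumptions.

```python
# Sketch: continuous ACO (ACO_R-style archive sampling) tuning SVM's C and gamma.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

def accuracy(params):
    C, gamma = np.exp(params)            # search in log space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

k, n_ants = 10, 8
archive = rng.uniform([-3, -6], [6, 2], (k, 2))     # (log C, log gamma)
scores = np.array([accuracy(s) for s in archive])

for _ in range(15):
    order = np.argsort(-scores)                     # best solutions first
    archive, scores = archive[order], scores[order]
    weights = np.exp(-np.arange(k) ** 2 / (2 * (0.3 * k) ** 2))
    weights /= weights.sum()
    for _ in range(n_ants):
        j = rng.choice(k, p=weights)                # pick a guiding solution
        sigma = np.abs(archive - archive[j]).mean(axis=0) + 1e-3
        ant = rng.normal(archive[j], sigma)         # sample around it
        s = accuracy(ant)
        worst = np.argmin(scores)
        if s > scores[worst]:                       # replace worst in archive
            archive[worst], scores[worst] = ant, s

best = archive[np.argmax(scores)]
print(f"best C={np.exp(best[0]):.3g}, gamma={np.exp(best[1]):.3g}, acc={scores.max():.3f}")
```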


Title of the Paper: Application of Hidden Markov Model for Human Mobility Modeling

 

Authors: Ha Yoon Song

Pages: 58-68

Abstract: Detailed mobility models of humans are essential knowledge for mobile computing, location-based services, sociology, urban planning and other related fields. Human mobility models have become more complicated with the expansion of cities, the development of lifestyles and many other factors. Nowadays, portable mobile devices are equipped with GPS or other positioning functionality, so sets of location data containing mobility patterns can easily be collected and further processed. In this paper we show the process of constructing human mobility models from positioning data sets. As a preprocessing stage, notable positions of human mobility are identified from the positioning data sets. A Hidden Markov Model is introduced in order to establish human mobility models. Among the various techniques that distill time-stamped data into models, the Baum-Welch algorithm successfully derives human mobility models with the UMDHMM tools. Owing to the flexibility of the Hidden Markov Model, our model can be extended to seasonal patterns of human mobility and can thus serve as a basis for other fields.
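To make the model structure concrete, the sketch below scores an observation sequence with the HMM forward algorithm, where hidden states are notable places and observations are position-cluster IDs. All probabilities are invented; the paper instead learns the matrices with Baum-Welch via the UMDHMM tools, which this sketch omits.

```python
# Sketch: HMM forward algorithm over notable places and observed location clusters.
import numpy as np

pi = np.array([0.6, 0.3, 0.1])                  # home, work, leisure
A = np.array([[0.7, 0.2, 0.1],                  # hidden-state transitions
              [0.3, 0.6, 0.1],
              [0.4, 0.3, 0.3]])
B = np.array([[0.8, 0.1, 0.1],                  # P(observed cluster | state)
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

def forward_likelihood(obs):
    """Likelihood of a cluster sequence under the (assumed) mobility HMM."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward_likelihood([0, 0, 1, 1, 1, 2, 0]))  # one day's cluster sequence
```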


Issue 3, Volume 7, 2013


Title of the Paper: Transparent-Box: Efficient Software Testing Method Combining Structural and Functional Testing Together

 

Authors: Jay Xiong, Lin Li

Pages: 69-90

Abstract: Existing software testing methods cannot be used dynamically in requirement modeling and system design before detailed coding. Yet often, more than 85% of the critical defects in a software product development are introduced into the product during the requirement modeling and product design processes. It is therefore easy to understand why NIST (National Institute of Standards and Technology) concluded that “Briefly, experience in testing software and systems has shown that testing to high degrees of security and reliability is from a practical perspective not possible.” This paper presents a new software testing method, called Transparent-Box, which combines functional testing and structural testing seamlessly, with the capability to automatically establish bidirectional traceability among the related documents, the test cases, and the corresponding source code according to the test case description. For each test case, this method not only helps users check whether the output (if any; it can be none when the method is used dynamically in requirement development and product design) is the same as what is expected, but also helps users check whether the execution path covers the expected one specified in the control flow. The method can thus be used dynamically throughout the entire software development process, from the very beginning down to the retirement of a software product, to find functional defects, logic defects, and inconsistency defects.
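A minimal sketch of the core idea that a test case checks both the output (which may be absent) and the executed path. Here the code under test appends branch markers to a trace list; a real transparent-box tool would obtain the trace from control-flow instrumentation. The function and test cases are invented for the example.

```python
# Sketch: a test case that verifies both expected output and expected path.
def grade(score, trace):
    if score >= 60:
        trace.append("pass-branch")
        return "pass"
    trace.append("fail-branch")
    return "fail"

def run_case(inputs, expected_output, expected_path):
    trace = []
    out = grade(*inputs, trace)
    ok_output = expected_output is None or out == expected_output  # output may be none
    ok_path = trace == expected_path                               # structural check
    return ok_output and ok_path

print(run_case((75,), "pass", ["pass-branch"]))   # True: output and path both match
print(run_case((40,), "pass", ["fail-branch"]))   # False: path matches, output does not
```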


Title of the Paper: Predicting Future Change Requests in Agile Software Engineering

 

Authors: Samir Omanovic, Emir Buza

Pages: 91-98

Abstract: Software engineering based on agile methods differs from plan-driven software engineering in many aspects. Based on our practical experience in agile software engineering, we concluded that one of the most important success factors is predicting future change requests. This article emphasizes the frequency of future change requests as a very important analysis factor for later solution selection and software maintenance. Through a case study, it describes a positive experience with the agile engineering of a software system for data import in an environment with frequent change requests. The main reason for the success is that the estimation of future changes is taken into account during the analysis. Data import is based on a web service for XML upload and Oracle database objects for importing, storing and checking data. Meta-model based design is applied to gain flexibility and meet the customer’s frequent change requests. A change request is implemented by changing the meta-model parameters, which is fast and reliable. There were many change requests over the life of this software system, and all of them were low-cost changes. The initially higher cost of developing software that is easily changeable is recouped later during the software’s evolution. Changes are also implemented quickly, with minimum effort, high quality and high customer satisfaction.
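A minimal sketch of meta-model-driven import: the importer is generic, so a change request is served by editing the meta-model rather than the code. The field specifications are illustrative assumptions; the paper's system uses a web service for XML upload and Oracle database objects rather than this in-memory toy.

```python
# Sketch: a generic importer driven entirely by a meta-model description.
META_MODEL = {
    "customer": [
        {"field": "name",  "type": str,   "required": True},
        {"field": "limit", "type": float, "required": False},
    ],
}

def import_record(entity, raw):
    """Validate and convert one raw record according to the meta-model."""
    spec, out = META_MODEL[entity], {}
    for col in spec:
        value = raw.get(col["field"])
        if value is None:
            if col["required"]:
                raise ValueError(f"missing required field {col['field']}")
            continue
        out[col["field"]] = col["type"](value)   # type conversion from meta-model
    return out

# A change request (new field, new check) edits META_MODEL, not this code.
print(import_record("customer", {"name": "ACME", "limit": "1000"}))
```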


Title of the Paper: Multi-biometric Face Recognition System Using Levels of Fusion

 

Authors: Elizabeth Garcia-Rios, Enrique Escamilla-Hernandez, Gualberto Aguilar-Torres, Omar Jacobo-Sanchez, Mariko Nakano-Miyatake, Hector Perez-Meana

Pages: 99-108

Abstract: This paper proposes the implementation of a multi-biometric face recognition system based on levels of fusion using stereo images. Three fusion levels are used in developing the proposed system: sensor-level fusion, feature-level fusion and decision-level fusion. At each level of fusion, Eigenfaces and Gabor filters are used for feature extraction, while a back-propagation neural network and a Support Vector Machine are used for data classification. At each level of fusion it is possible to evaluate the performance of the proposed face recognition system, marking the highest identification and verification rates. Feature-level fusion, which combines two images of the face taken at different angles, gives the best result, because it yields more coefficients, and hence more information, constituting a new template of the face. This provides a higher recognition rate in comparison with systems that use a picture from a single angle. The system’s behavior was also observed with different feature extraction and classification algorithms. Evaluation results show that the best results are obtained using Gabor filters with the Support Vector Machine, since in this case the recognition rates are higher and the computation time is lower.
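A minimal sketch of feature-level fusion: feature vectors extracted from two face views are concatenated into a single template before classification. Random vectors stand in for the Eigenface or Gabor features, and the subject labels are synthetic; both are assumptions made for the example.

```python
# Sketch: feature-level fusion of two face views followed by SVM classification.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 200, 32
labels = rng.integers(0, 10, n)                        # 10 subjects
frontal = rng.normal(labels[:, None], 1.0, (n, d))     # stand-in view-1 features
profile = rng.normal(labels[:, None], 1.0, (n, d))     # stand-in view-2 features

fused = np.hstack([frontal, profile])                  # feature-level fusion
Xtr, Xte, ytr, yte = train_test_split(fused, labels, random_state=0)
print("fused accuracy:", SVC().fit(Xtr, ytr).score(Xte, yte))
```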


Title of the Paper: Comparison of Naïve Bayes and SVM Classifiers in Categorization of Concept Maps

 

Authors: Krunoslav Zubrinic, Mario Milicevic, Ivona Zakarija

Pages: 109-116

Abstract: A concept map is a graphic tool that describes a logical structure of knowledge in the form of connected concepts. Many people create and use concept maps as planning, knowledge representation or evaluation tools, and store them in public repositories. In such an environment, the content and quality of these maps vary. When users want to use a specific map, they have to know to which domain that map belongs, but many creators do not pay enough attention to complete and accurate labeling of their documents. Manual categorization of maps in a large repository is almost impossible, as it is a very long and demanding procedure. In such an environment, automatic classification of concept maps according to their content can help users identify the relevant map. There is very little research on automatic classification of concept maps. In this paper we propose a method for automatic categorization of concept maps using a simple bag of words. In our experiment, the data for classification are taken from a set of publicly available concept maps. The fetched maps are filtered by language and parsed. Concept labels are extracted from the filtered set of maps, preprocessed and prepared for classification. The most important features are selected, and the data are prepared for learning and classification. Training and classification are performed using naïve Bayes and SVM classifiers. The achieved results are promising, and we consider that they can be improved with further data preprocessing and adjustment of the classifiers.
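A minimal sketch of the comparison: bag-of-words features built from concept labels, classified with naïve Bayes and with a linear SVM. The tiny corpus of label strings and the two domains are invented for the example.

```python
# Sketch: naive Bayes vs. linear SVM on bag-of-words concept-label features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

maps = ["cell nucleus mitochondria dna", "force mass acceleration energy",
        "dna gene protein enzyme",       "velocity momentum friction energy"]
domains = ["biology", "physics", "biology", "physics"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(CountVectorizer(), clf).fit(maps, domains)
    print(type(clf).__name__, model.predict(["gene enzyme cell"]))
```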


Title of the Paper: Conceptual Analysis and Modeling of a Metro System

 

Authors: A. Spiteri Staines

Pages: 117-125

Abstract: This work considers the problems associated with different levels of complexity and decomposition in modeling a large transport system such as an underground metro. Such a system shares many parallels with complex computer systems. Many techniques based on object-oriented analysis and decomposition can be found. The problem is formulated, and the Paris, France metro system is considered. Different considerations related to decomposition, functionality and system representation have to be taken into account for this autonomous system. The solution uses three main levels or views: i) a top-level, ii) a middle-level and iii) a low-level view. The solutions are based on these three main levels, hierarchy, modularization, decomposition and certain assumptions. The implementation is explained and discussed.
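A minimal sketch of the three-level decomposition described in the abstract: a top-level network view composed of middle-level lines, themselves composed of low-level stations. The class names and the two-line example network are illustrative assumptions.

```python
# Sketch: three-level object-oriented decomposition of a metro system.
class Station:                      # low-level view
    def __init__(self, name):
        self.name = name

class Line:                         # middle-level view
    def __init__(self, name, stations):
        self.name, self.stations = name, [Station(s) for s in stations]

class MetroNetwork:                 # top-level view
    def __init__(self, lines):
        self.lines = lines
    def station_count(self):
        return sum(len(l.stations) for l in self.lines)

metro = MetroNetwork([Line("M1", ["A", "B", "C"]), Line("M2", ["C", "D"])])
print(metro.station_count())        # 5
```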


Issue 4, Volume 7, 2013


Title of the Paper: Iterative Text Clustering of Search Results

 

Authors: Gábor Szűcs, Zoltán Móczár

Pages: 127-134

Abstract: Search results clustering, which clusters the returned documents, is the most preferred approach for re-organizing search results. This paper deals with the text clustering problem: many parameters influence the inner operation of the well-known clustering algorithms, but an average user is not able to set these parameters appropriately. In this paper a novel approach is proposed that solves this problem with an iterative method based on user feedback. In our solution, questions are generated for the user in order to offer the most appropriate result selected from the possible ones. We have developed a complex clustering algorithm that automatically optimizes the parameter values based on the user’s answers. Our method calculates several possible results, which are likely to include one of the best results from the user’s point of view. The main purpose of applying user feedback is to give the user the possibility to interact with the search results and to achieve a better understanding of the topics related to the given query.
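A minimal sketch of the feedback loop: candidate clusterings are produced for several parameter values, and a user answer selects which candidate to keep. Here a silhouette-based function stands in for the user's answers; the synthetic document vectors and the parameter grid are assumptions made for the example.

```python
# Sketch: candidate clusterings chosen by (simulated) user feedback.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
docs = np.vstack([rng.normal(c, 0.5, (30, 8)) for c in (0, 3, 6)])  # fake doc vectors

def user_prefers(candidates):
    """Stand-in for asking the user questions about candidate results."""
    return max(candidates, key=lambda km: silhouette_score(docs, km.labels_))

candidates = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(docs)
              for k in (2, 3, 4, 5)]                     # parameter grid to explore
chosen = user_prefers(candidates)
print("clusters kept:", chosen.n_clusters)
```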


Title of the Paper: Routing Optimization for ATM Cash Replenishment

 

Authors: Peter Kurdel, Jolana Sebestyénová

Pages: 135-144

Abstract: The cash deployment strategy for a network of ATMs should take into account the analysis of inventory policies and logistics costs as well as the routing of replenishment vehicles. The optimal strategy has to focus on reducing cash-related expenses while safeguarding that ATMs do not run out of cash. Shorter routes with many time window constraint violations are not always the best solution. The problem, which can be tackled as a kind of rich vehicle routing problem, is solved in the paper using a parallel genetic algorithm. The proposed model is able to solve cases with simultaneous requirements of several replenishments per day for some customers (ATMs) as well as a single replenishment over several days for other groups of customers. A dynamic vehicle routing problem (VRP) must rely on up-to-date information. One type of dynamic model may consider new customer orders that arise after the routes have been initially planned. In the light of this information, the vehicles need to be re-routed so as to reduce costs and meet customer service time windows.
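A minimal sketch (sequential, unlike the paper's parallel GA) of routing one replenishment vehicle over ATMs with soft time windows: the fitness is travel distance plus a penalty for arriving after an ATM's window closes. Locations, windows, unit vehicle speed and GA settings are illustrative assumptions.

```python
# Sketch: permutation GA for a single-vehicle ATM route with soft time windows.
import numpy as np

rng = np.random.default_rng(0)
n = 8
xy = rng.uniform(0, 100, (n, 2))                     # ATM locations
close = rng.uniform(100, 400, n)                     # latest service times

def cost(route):
    t, total, pos = 0.0, 0.0, np.zeros(2)
    for i in route:
        leg = np.linalg.norm(xy[i] - pos)
        total, t, pos = total + leg, t + leg, xy[i]  # unit vehicle speed
        total += 10 * max(0.0, t - close[i])         # time-window violation penalty
    return total

def crossover(a, b):                                 # order crossover (OX)
    i, j = sorted(rng.choice(n, 2, replace=False))
    child = [-1] * n
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(n):
        if child[k] == -1:
            child[k] = rest.pop(0)
    return np.array(child)

pop = [rng.permutation(n) for _ in range(40)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:10]
    pop = elite + [crossover(elite[rng.integers(10)], elite[rng.integers(10)])
                   for _ in range(30)]
    for r in pop[10:]:                               # swap mutation on offspring
        if rng.random() < 0.3:
            i, j = rng.choice(n, 2, replace=False)
            r[i], r[j] = r[j], r[i]

pop.sort(key=cost)
print("best route:", pop[0], "cost:", round(cost(pop[0]), 1))
```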


Title of the Paper: Robust Steganography Based on QIM Algorithm to Hide Secret Images

 

Authors: Oswaldo Juarez-Sandoval, Angelina Espejel-Trujillo, Mariko Nakano-Miyatake, Hector Perez-Meana

Pages: 145-152

Abstract: Steganography research has grown rapidly over the last decade. The technique has been used to hide different types of information, such as medical, personal and business information, and in some cases it has been used in criminal acts. This paper presents a robust steganographic scheme focused on embedding a secret image into a cover image in the DCT domain using the QIM embedding algorithm. The experimental results show the robustness of the proposed scheme against JPEG compression and noise contamination, while keeping the hidden data imperceptible. The proposed scheme is also robust to commercial stego-analyzers, which cannot detect the presence of the hidden data in the stego-image generated by the proposed scheme. The better performance of the proposed scheme is shown by comparison with a previously reported steganography algorithm having the same objective.
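A minimal sketch of binary QIM on DCT coefficients: each secret bit selects one of two interleaved quantizers with step delta, and extraction picks the quantizer whose lattice is nearest. A 1-D signal stands in for the cover image's block-DCT coefficients; delta, the signal and the bit string are assumptions.

```python
# Sketch: binary QIM embedding and extraction in the DCT domain.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
cover = rng.normal(0, 10, 64)                 # stand-in pixel row
bits = rng.integers(0, 2, 64)                 # secret-image bits
delta = 4.0

c = dct(cover, norm="ortho")
stego_c = delta * np.round(c / delta - bits / 2) + bits * delta / 2  # QIM embed
stego = idct(stego_c, norm="ortho")           # stego signal back in pixel domain

# Extraction: choose the bit whose quantizer lattice is nearest.
c2 = dct(stego, norm="ortho")
d0 = np.abs(c2 - delta * np.round(c2 / delta))
d1 = np.abs(c2 - (delta * np.round(c2 / delta - 0.5) + delta / 2))
recovered = (d1 < d0).astype(int)
print("bit errors:", int(np.sum(recovered != bits)))
```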


Title of the Paper: Modified GA with the Possibility of Selecting a Selection Operator According to a Set Criterion

 

Authors: Razija Turcinhodzic, Zikrija Avdagic, Samir Omanovic

Pages: 153-161

Abstract: Genetic algorithms are used to solve complex problems in various areas. Research related to genetic algorithms mainly focuses on their three operators: selection, crossover, and mutation. The need to improve the algorithm has led to the creation of different variants of these three operators, many of which are adapted to specific problems. This paper deals with the most commonly used selection operators and their influence on the efficiency and robustness of the genetic algorithm. The idea behind this paper is to combine selection operators inside the genetic algorithm during its execution, to decrease the risk of selecting an inappropriate selection operator for the considered test function. Operators are combined so that preference in the current generation is given to the operator that produces the most suitable population, according to the set criteria, after crossover and mutation. The criteria used in this paper are the best average overall fitness of the population and the best individual fitness. This research has shown that changing selection operators within the genetic algorithm has positive effects on its functionality.
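A minimal sketch of the paper's idea: each generation is run once with every candidate selection operator, and the offspring population that best satisfies the set criterion (here, best average fitness) is kept. The sphere test function and GA settings are illustrative assumptions.

```python
# Sketch: per-generation choice between selection operators by a set criterion.
import numpy as np

rng = np.random.default_rng(0)
fitness = lambda x: -np.sum(x ** 2, axis=1)        # maximize => minimize sphere

def tournament(pop, fit):
    i = rng.integers(0, len(pop), (len(pop), 2))
    return pop[np.where(fit[i[:, 0]] > fit[i[:, 1]], i[:, 0], i[:, 1])]

def roulette(pop, fit):
    p = fit - fit.min() + 1e-9                     # shift to positive weights
    return pop[rng.choice(len(pop), len(pop), p=p / p.sum())]

def offspring(parents):
    a = rng.random((len(parents), 1))
    kids = a * parents + (1 - a) * rng.permutation(parents)   # blend crossover
    return kids + rng.normal(0, 0.05, kids.shape)             # Gaussian mutation

pop = rng.uniform(-5, 5, (30, 4))
for gen in range(50):
    fit = fitness(pop)
    candidates = [offspring(op(pop, fit)) for op in (tournament, roulette)]
    pop = max(candidates, key=lambda c: fitness(c).mean())    # criterion: best average
print("best fitness:", fitness(pop).max())
```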