ISSN: 1998-4308
Year 2011
All papers of the journal were peer reviewed by two
independent reviewers. Acceptance was granted when both
reviewers' recommendations were positive.
Paper Title, Authors, Abstract, Pages (Issue 1, Volume 5, 2011)
Information Retrieval and
Information Extraction in Web 2.0 Environment
Nikola Vlahovic
Abstract:
With the rise of the Web 2.0 paradigm, new trends in information retrieval (IR) and information extraction (IE) can be observed. The significance of IR and IE as fundamental methods of acquiring new and up-to-date information is crucial for efficient decision making. Social aspects of modern information retrieval are gaining importance over technical aspects. The main reason for this trend is that IR and IE services are becoming more and more widely available to end users who are regular users rather than information professionals. New methods that rely primarily on user interaction and communication also show similar success in IR and IE tasks. Web 2.0 has an overall positive impact on IR and IE, as it is based on a more structured data platform than the earlier web. Moreover, new tools are being developed for online IE services that make IE accessible even to users without technical knowledge and background. The goal of this paper is to review these trends and put them into the context of the improvements and potential that IR and IE have to offer to knowledge engineers, information workers, and typical Internet users.
Pages: 1-9
Tiny Programming Language to
Improve Assembly Generation for Automation
Equipments
Jose Metrolho, Monica Costa, Fernando Reinaldo
Ribeiro
Abstract:
Development time in industrial informatics systems, within industry environments, is a very important issue for competitiveness. The use of adequate target-specific programming languages is very important because they can facilitate and improve developers' productivity, allowing solutions to be expressed in the idiom and at the level of abstraction of the problem's domain. In this paper we present a target-specific programming language, which was designed to improve the design cycle of code generation for an industrial embedded system. The native assembly code, the new language structure and its constructs are presented in the paper. The proposed target-specific language is expressed using words and terms that are related to the target's domain, and consequently it is now easier to program, understand and validate the desired code. The language's efficiency is also demonstrated by comparing code described using the new language against the previously used code. The design cycle is improved with the use of the target-specific language because both description and debugging time are significantly reduced with this new software tool. This is also a case of university-industry partnership.
Pages: 10-17
Solving Multiobjective
Optimization under Bounds by Genetic Algorithms
Anon Sukstrienwong
Abstract:
In complex engineering optimization problems, several objectives are required to be kept within specific intervals in which the system can operate or act efficiently. Most researchers reduce the objective vector to a single objective and are interested in the set known as the Pareto optimal set. This paper, however, is concerned with the application of genetic algorithms to multi-objective problems in which some objectives are required to be balanced within given objective bounds. The proposed approach, called genetic algorithms for objective boundaries (the GAsOB scheme), searches for feasible solutions to such multi-objective problems. An elitism technique is employed to enhance the efficiency of the algorithm. The experimental results are compared with results derived by a linear search technique and by traditional genetic algorithms over the same search space. The experiments show that the GAsOB scheme generates solutions efficiently when the number of eras and the immigration rate are tuned.
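A minimal sketch of the bound-penalising, elitist GA idea summarized above. The objective functions, bounds, era count, and immigration rate below are illustrative assumptions, not the authors' GAsOB implementation:

```python
import random

# Hypothetical two-objective problem; each objective must stay inside its bounds.
OBJECTIVES = [lambda x: x[0] ** 2 + x[1], lambda x: (x[0] - 2) ** 2 + x[1] ** 2]
BOUNDS = [(0.0, 4.0), (0.0, 5.0)]  # allowed interval per objective

def penalty(ind):
    """Total distance by which the objectives leave their bounds (0 means feasible)."""
    total = 0.0
    for f, (lo, hi) in zip(OBJECTIVES, BOUNDS):
        v = f(ind)
        total += max(lo - v, 0.0) + max(v - hi, 0.0)
    return total

def evolve(pop_size=40, eras=100, immigration_rate=0.1, elite=2):
    rand_ind = lambda: [random.uniform(-5, 5), random.uniform(-5, 5)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(eras):
        pop.sort(key=penalty)
        nxt = pop[:elite]                                                       # elitism keeps the best
        nxt += [rand_ind() for _ in range(int(immigration_rate * pop_size))]    # random immigrants
        while len(nxt) < pop_size:                                              # blend crossover + mutation
            a, b = random.sample(pop[: pop_size // 2], 2)
            nxt.append([(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)])
        pop = nxt
    return min(pop, key=penalty)

print(evolve())  # a point whose objectives lie inside (or closest to) the bounds
```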
Pages: 18-25
Image Authentication and
Recovery Using BCH Error-Correcting Codes
Jose Antonio Mendoza Noriega, Brian M. Kurkoski,
Mariko Nakano Miyatake, Hector Perez Meana
Abstract:
In this paper an image authentication and recovery
algorithm is proposed where the modified areas in an
image are detected, and in addition an approximation
of the original image, called a digest image Cdig,
is recovered. Two different watermarks are used. One semi-fragile watermark, w1, is used for the authentication phase. The second watermark, wdig, is
obtained by compressing the digest image Cdig using
an arithmetic code, then redundancy is added by
applying a BCH error correcting code (ECC). Finally
both watermarks are embedded in the integer wavelet
transform (IWT) domain. The proposed scheme is
evaluated from different points of view: watermark imperceptibility, payload, detection of the tampered area
and robustness against some non-intentional attacks.
Experimental results show the system detects
accurately where the image has been modified, and it
is able to resist large modifications; for example,
the system can tolerate modifications close to 10%
of the total pixels of the watermarked image and recover 100% of the digest image. The
watermarked image and recovered digest image have
good quality, with average PSNR 39.88 dB and 28.63
dB, respectively, using ECC rate 0.34. The proposed
system also is robust to noise insertion. It is able
to tolerate close to 5% errors produced by salt and
pepper noise insertion, while recovering 100% of the
digest image.
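A minimal sketch of quantization-based embedding of watermark bits in a wavelet subband, in the spirit of the scheme above; it uses PyWavelets' ordinary Haar DWT as a stand-in for the integer wavelet transform, omits the arithmetic-coding and BCH stages, and the quantization step q is an assumed value:

```python
import numpy as np
import pywt

def embed_bits(image, bits, q=16.0):
    """Force the parity of each cH quantization index to carry one watermark bit."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    flat = cH.flatten()
    for i, b in enumerate(bits[: flat.size]):
        k = int(np.round(flat[i] / q))
        if k % 2 != b:
            k += 1
        flat[i] = k * q
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), 'haar')

def extract_bits(image, n, q=16.0):
    """Read the parity of the first n cH quantization indices back out."""
    _, (cH, _, _) = pywt.dwt2(image.astype(float), 'haar')
    flat = cH.flatten()
    return [int(np.round(flat[i] / q)) % 2 for i in range(n)]
```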
Pages: 26-33
A Face Recognition Algorithm
using Eigenphases and Histogram Equalization
Kelsey Ramirez-Gutierrez, Daniel Cruz-Perez, Jesus
Olivares-Mercado, Mariko Nakano-Miyatake, Hector
Perez-Meana
Abstract:
This paper proposes a face recognition algorithm based on histogram equalization methods. These methods standardize the illumination of face images, reducing its variation before feature extraction; the features are then extracted using the phase spectrum of the histogram-equalized image together with principal component analysis. The proposed scheme allows a reduction of the amount of data without much information loss. Evaluation results show that the proposed feature extraction scheme, when used together with a support vector machine (SVM), provides a recognition rate higher than 97% and a verification error lower than 0.003%.
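A minimal sketch of the described pipeline (histogram equalization, phase spectrum, PCA, SVM) using NumPy and scikit-learn; the number of principal components and the SVM kernel are illustrative choices, not the authors' settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def equalize(img):
    """Histogram equalization of an 8-bit grayscale face image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    return (cdf[img] * 255).astype(np.uint8)

def phase_features(img):
    """Phase spectrum of the equalized image, flattened into a feature vector."""
    return np.angle(np.fft.fft2(equalize(img))).ravel()

def train(faces, labels, n_components=50):
    """faces: list of equally sized uint8 images; labels: subject identities."""
    X = np.array([phase_features(f) for f in faces])
    pca = PCA(n_components=n_components).fit(X)
    clf = SVC(kernel='linear').fit(pca.transform(X), labels)
    return pca, clf

def predict(pca, clf, face):
    return clf.predict(pca.transform([phase_features(face)]))[0]
```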
Pages: 34-41
The Chinese as Second Language
Multidimensional Computerized Adaptive Testing
System Construction
Hsuan-Po Wang, Bor-Chen Kuo, Rih-Chang Chao, Ya-Hsun
Tsai
Abstract:
With the rising demand for Chinese as a Second Language (CSL) learning, Chinese proficiency testing has become increasingly popular. There are several major proficiency tests with paper-and-pencil (P&P) formats for Chinese learners, including Taiwan's Test of Proficiency-Huayu (TOP-Huayu), the mainland's Hanyu Shuiping Kaoshi (HSK), and America's Scholastic Assessment Test (SAT). In this study, the Common European Framework of Reference (CEFR) is applied and the CSL Proficiency Index is used as a guideline to develop a multidimensional computerized adaptive testing (MCAT) system for enhancing the CSL proficiency test. This research collected empirical data via a computer-based test (CBT) and then developed and conducted a simulation study on the MCAT system. The proposed system provides a framework that uses item response theory (IRT) as the ability scoring method and applies it to the MCAT process. In addition, this research evaluates the effectiveness of the MCAT system. A total of 658 empirical data records were collected from Grace Christian College in the Philippines in September 2009. The results indicate that the CSL MCAT system should adopt MAP as the ability estimation method. The interface of the MCAT system is also presented in this research.
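A minimal sketch of MAP ability estimation under a unidimensional 2PL IRT model with a standard-normal prior, evaluated by grid search; the paper's model is multidimensional, and the item parameters below are illustrative assumptions:

```python
import numpy as np

def map_ability(responses, a, b, grid=np.linspace(-4, 4, 161)):
    """MAP estimate of ability theta given 0/1 responses,
    item discriminations a, and item difficulties b."""
    theta = grid[:, None]                            # (grid points, 1)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))       # 2PL success probabilities
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    logprior = -0.5 * grid ** 2                      # N(0, 1) prior, up to a constant
    return grid[np.argmax(loglik + logprior)]

# Example with three items answered correct, wrong, correct (illustrative parameters).
print(map_ability(np.array([1, 0, 1]),
                  a=np.array([1.2, 0.8, 1.5]),
                  b=np.array([-0.5, 0.0, 1.0])))
```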
Pages: 42-49
A Non-Secure Information
Systems and the Isolation Solution
Tai-Hoon Kim
Abstract:
In this paper, we define intrusion confinement through isolation to address this security issue, discuss its importance, and finally present an isolation protocol. Security has emerged as the biggest threat to information systems. System protection mechanisms such as access controls can be fooled by authorized but malicious users, masqueraders, and trespassers. As a result, serious damage can be caused, either because many intrusions are never detected or because the average detection latency is too long.
Pages: 50-57
Path-Bounded Finite Automata on
Four-Dimensional Input Tapes
Yasuo Uchida, Takao Ito, Makoto Sakamoto, Ryoju
Katamune, Kazuyuki Uchida, Hiroshi Furutani, Michio
Kono, Satoshi Ikeda, Tsunehiro Yoshinaga
Abstract:
M. Blum and C. Hewitt first proposed two-dimensional
automata as a computational model of two-dimensional
pattern processing, and investigated their pattern
recognition abilities in 1967. Since then, many
researchers in this field have been investigating
many properties about automata on two- or
three-dimensional tapes. The question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from the theoretical and
practical standpoints. Recently, due to the advances
in many application areas such as computer
animation, motion image processing, virtual reality
systems, and so forth, it has become increasingly
apparent that the study of four-dimensional pattern
processing has been of crucial importance. Thus, the
study of four-dimensional automata, i.e., four-dimensional automata with the time axis, as a computational model of four-dimensional pattern processing has also been meaningful. On the other
hand, the comparative study of the computational
powers of deterministic and nondeterministic
computations is one of the central tasks of
complexity theory. This paper investigates the
computational power of nondeterministic computing
devices with restricted nondeterminism. There are
only a few results measuring the computational power
of restricted nondeterminism. In general, there are
three possibilities to measure the amount of
nondeterminism in computation. In this paper, we
consider the possibility to count the number of
different nondeterministic computation paths on any
input. In particular, we deal with seven-way
four-dimensional finite automata with multiple input
heads operating on four-dimensional input tapes.
Pages: 58-65
A Relationship between Marker
and Inkdot for Four-Dimensional Automata
Yasuo Uchida, Takao Ito, Makoto Sakamoto, Ryoju
Katamune, Kazuyuki Uchida, Hiroshi Furutani, Michio
Kono, Satoshi Ikeda, Tsunehiro Yoshinaga
Abstract:
A multi-marker automaton is a finite automaton which
keeps marks as pebbles in the finite control, and
cannot rewrite any input symbols but can make marks
on its input with the restriction that only a
bounded number of these marks can exist at any given
time. The multi-marker automaton was introduced to improve the picture recognizability of the finite automaton. On the other
hand, a multi-inkdot automaton is a conventional
automaton capable of dropping an inkdot on a given
input tape for a landmark, but unable to further
pick it up. Due to the advances in many application
areas such as moving image processing, computer
animation, and so on, it has become increasingly
apparent that the study of four-dimensional pattern
processing has been of crucial importance. Thus, we
think that the study of four-dimensional automata as
a computational model of four-dimensional pattern
processing has also been meaningful. This paper
deals with marker versus inkdot over
four-dimensional input tapes, and investigates some
properties.
Pages: 66-73
Effort and Cost Allocation in
Medium to Large Software Development Projects
Kassem Saleh
Abstract:
The proper allocation of financial and human
resources to the various software development
activities is a very important and critical task
contributing to the success of the software project.
To provide a realistic allocation, the manager of a
software development project should account for the
various activities needed to ensure the completion of the project with the required quality, on time and within budget. In this paper, we provide
guidelines for cost and effort allocation based on
typical software development activities using
existing requirements-based estimation techniques.
Pages: 74-79
A Halftoning-Based Multipurpose
Image Watermarking with Recovery Capability
Carlos Santiago-Avila, Mario Gonzalez-Lee, Mariko
Nakano-Miyatake, Hector Perez-Meana
Abstract:
Nowadays digital watermarking has become an important technique because, using computational tools, digital contents can be copied and/or modified easily. Initially, digital watermarking was used for either copyright protection or content authentication. However, in many situations both purposes (copyright protection and content authentication) are required to be satisfied at the same time. A watermarking scheme that satisfies both purposes is called a multipurpose watermarking scheme. In this paper, a novel multipurpose watermarking scheme is proposed, in which a self-embedding technique based on halftoning is used for content authentication and recovery, and a binary pattern is embedded into the halftone image using a quantization-based embedding method for copyright protection. Experimental results show favorable performance of the proposed algorithm.
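A minimal sketch of error-diffusion halftoning (Floyd-Steinberg), the kind of operation that could generate the self-embedded halftone digest described above; the quantization-based copyright embedding stage is not shown, and this particular kernel is an assumption rather than the authors' choice:

```python
import numpy as np

def floyd_steinberg(gray):
    """Error-diffusion halftoning of a grayscale image with values in [0, 255]."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 255.0 if img[y, x] >= 128 else 0.0
            err = img[y, x] - new
            out[y, x] = int(new)
            # Push the quantization error onto unprocessed neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out
```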
Pages: 80-87
A Logical Approach to Image
Recognition with Spatial Constraints
R. K. Fedorov, A. O. Shigarov
Abstract:
In this paper an approach to recognizing objects in images is proposed. The approach is based on logical inference in CLP Prolog using structural descriptions of objects. Searching for object edges in the image is performed as unification of the built-in predicate line satisfying a set of constraints defined by the description. The structural description is represented as CLP Prolog rules.
Pages: 88-95
Extended Residual Aggregated
Risk Assessment – A Tool For Managing Effectiveness
of the IT Audit
Traian Surcel, Cristian Amancei, Ana-Ramona Bologa,
Alexandra Florea, Razvan Bologa
Abstract:
This paper proposes an audit methodology which aims
to identify key risks that arise during the IT audit
within an organization and presents the impact of
identified risks. This involves evaluating the
organization's tolerance to IT systems
unavailability, identifying auditable activities and
subtasks, identifying key risk factors and the
association of weights, evaluating and classifying
significant risks identified, conducting audit
procedures based on questionnaires and tests and
assessing the remaining aggregate risk that was not
reduced by effective controls. Verifying the
existence of compensating controls and the
possibility of their implementation in an iterative
manner, followed by a reassessment of covered risks,
after each iteration, eventually provides an
insignificant remaining aggregate risk. The development of the audit mission has to be correlated with the corporate governance requirements, quality assurance, and the marketing of the audit function. The results obtained are
evaluated by taking into consideration the
confidentiality and integrity of resources involved.
Pages: 96-105
The Effect of Organizational
Readiness on CRM and Business Performance
Cristian Dutu, Horatiu Halmajan
Abstract:
CRM is a business strategy which aims to create
value for both organization and customers through
initiating and maintaining customer relationships.
As a core strategy, CRM is based on using a
marketing information system and the company’s IT
infrastructure. CRM technology plays an important
role in creating customer knowledge, which is the
core of any CRM initiative. The CRM strategy will
not yield the expected results without the proper
use of information technology in the CRM processes.
Organisational CRM readiness is related to the level
of available technological resources which may be
oriented towards CRM implementation. This paper examines the direct outcomes of CRM activities, as well as the relationship between these outcomes and business performance. We also analysed the effect of
the level of organisational CRM readiness on the
degree to which companies implemented CRM
activities. We conducted a survey on 82 companies
operating in the Western region of Romania, which
revealed that CRM implementation generates superior
business performance.
Pages: 106-114
Extending a Method of
Describing System Management Operations to
Energy-Saving Operations in Data Centers
Matsuki Yoshino, Michiko Oba, Norihisa Komoda
Abstract:
The authors propose a method for describing system
management operations based upon patterns identified
by analyzing operations in data centers. Combined
with a CMS (Configuration Management System) defined
in ITIL® (Information Technology Infrastructure
Library), it is possible to calculate energy
consumption of an information system managed by
management operations described by the proposed
method. To demonstrate the effectiveness of the
method, examples of saving energy in operations
described by the proposed method are shown with an
example of calculating the energy savings.
Pages: 115-122
Vehicle Track Control
Debnath Bhattacharyya, Tai-Hoon Kim
Abstract:
Lane Design for Optimal Traffic (LDOT) is considered
as an effective tool to improve the level of traffic
services. It integrates the newly emerged IT
technologies with the traditional traffic
engineering. By providing the traffic partners with
better communications, LDOT can significantly boost
the traffic managements and operations. Meanwhile,
however, the deployment of the LDOT applications
often involves a huge amount of investment, which may be discouraging in a challenging economy like the current one. Therefore, how to increase the cost-effectiveness of LDOT systems is a matter of wide concern. There has been limited research
effort on the optimization of the LDOT systems. Lane
Design for Speed Optimization (LDSO) presents a new
critical lane analysis as a guide for designing
speed optimization to serve rush-hour traffic
demands. Physical design and speed optimization are
identified, and methods for evaluation are provided.
The Lane Design for Speed optimization (LDSO)
analysis technique is applied to the proposed design
and speed optimization plan. Lane Design for Speed
Optimization can robustly boost speed management and operations. Therefore, how to increase the speed optimization of a lane is a matter of wide concern, and there has been limited research effort on the optimization of LDSO systems. Design of Non-Accidental Lane (DNAL) presents a new optimal lane analysis as a guide for designing non-accidental lanes to serve better lane utilization. The
accident factors adjust the base model estimates for
individual geometric design element dimensions and
for traffic control features. The Design of Non
Accidental Lane (DNAL) analysis technique is applied
to the proposed design and speed optimization plan.
Design of Non-Accidental Lane can robustly manage lane operations to avoid accidents. Therefore, how to increase speed optimization within the non-accidental zone of a lane is a matter of wide concern, and there has been limited research effort on the optimization of DNAL systems.
Pages: 123-131
Faster Facility Location and
Hierarchical Clustering
J. Skala, I. Kolingerova
Abstract:
We propose several methods to speed up the facility
location, and the single link and the complete link
clustering algorithms. The local search algorithm
for the facility location is accelerated by
introducing several space partitioning methods and a
parallelisation on the CPU of a standard desktop
computer. The influence of the cluster size on the
speedup is documented. The paper further presents
the computation of the single link and the complete
link clustering on the GPU using the CUDA
architecture.
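For reference, the baseline single-link and complete-link clusterings that the paper accelerates can be reproduced on the CPU with SciPy; this sketch shows only the un-accelerated baseline, not the space-partitioning, parallel, or CUDA variants:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.random.rand(200, 2)          # toy 2-D point set

single_tree = linkage(points, method='single')        # single-link dendrogram
complete_tree = linkage(points, method='complete')     # complete-link dendrogram

labels = fcluster(single_tree, t=5, criterion='maxclust')  # cut into 5 clusters
print(labels[:10])
```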
Pages: 132-139
Paper Title, Authors, Abstract, Pages (Issue 2, Volume 5, 2011)
A Question Answering System on
Domain Specific Knowledge with Semantic Web Support
Borut Gorenjak, Marko Ferme, Milan Ojstersek
Abstract:
In today’s world the majority of information is
accessible via the World Wide Web. A common way to
access this information is through information
retrieval applications like web search engines. We
already know that web search engines flood their users with an enormous amount of data, from which they cannot figure out the essential and most important
information. These disadvantages can be reduced with
question answering systems. The basic idea of
question answering systems is to be able to provide
answers to a specific question written in natural
language. The main goal of question answering
systems is to find a specific answer. This paper
presents the architecture of our ontology-driven system, which uses semantic descriptions of processes, databases and web services for a question answering system in the Slovenian language.
Pages: 141-148
Using Geographic Information
System for Wind Parks’ Software Solutions
Adela Bara, Anda Velicanu, Ion Lungu, Iuliana Botha
Abstract:
A Geographic Information System can be used in order
to store, analyze and predict data regarding wind
parks. Such data can refer to the natural factors
that can affect the wind turbines, the placement of
the turbines or their power capacity. In this paper we discuss the possibility of managing wind parks in Romania, based on the wind speed and altitude of different regions.
Pages: 149-156
A Phased Migration Strategy to
Integrate the New Data Acquisition System into the
Laguna Verde Nuclear Power Plant
Ramon Montellano-Garcia, Ilse Leal-Aulenbacher,
Hector Bernal-Maldonado
Abstract:
This paper focuses on the strategy applied in the
gradual integration of a new data acquisition system
with the online Plant Process Computer of the Laguna
Verde Nuclear Power Plant. Due to the fact that the
data acquisition modules needed to be replaced, the
need for a New Acquisition System arose. The issue
of whether or not to embark on a complete or modular
replacement of its elements required careful
consideration. At Laguna Verde, we opted for a
phased migration approach, considering two main
aspects: that the plant monitoring must remain
online during the whole process, because it is
required for plant operation and that human machine
interfaces and computations design basis must be
maintained, in order to minimize regulatory impact.
The core of a phased migration strategy hinges on a
flexible modular system capable of accepting data
streams from multiple data acquisition systems and
computers and consolidating this data for their
presentation in control room displays and in the
power plant historical archive. This paper describes
the methodology that was applied to integrate the
new data acquisition system into the legacy system,
which is based on a real-time mechanism and
historical data stream transfer.
Pages: 157-165
Java Interrogation of an
Homogeneous System of Inheritance Knowledge Bases by
Client-Server Technology
Nicolae Tandareanu
Abstract: The subject developed in this paper is connected with the remote interrogation of a knowledge base. We suppose
we have a collection of the same kind of knowledge
bases, namely, extended inheritance knowledge bases.
We use the client-server technology to query each
element of such a system of knowledge bases. To
implement the application we used Java technology.
The reasoning process is based on an inference
engine. The mechanism of this engine is based on the
extended inheritance presented in [18], [22] and
[23]. A methodological description is given based on
Java technology. Both the server and client side of
the application are presented step by step. The way
of presentation is divided into stages, each stage
is well defined according to the proposed tasks.
Each step of the presentation can be easily modified
and adapted by a person who wants to write his/her
own application to query a knowledge base by
client-server technology. The use of the extended
inheritance knowledge bases can be explained by the
fact that the inference engine in this case is
easier to write than the inference engine for other
methods of knowledge representation. The last
section enumerates several developing directions.
Pages: 166-174
Stereoscopy in Object’s Motion
Parameters Determination
A. Zak
Abstract: Computer vision is the science and technology of machines that are able to extract from an image the information necessary to solve some task. As a scientific discipline, computer vision is concerned with the theory behind systems that extract information from images. It must be noted that computer vision is still a very strong and fast-developing discipline because of the expansion of technology, especially computers and cameras. The image data can take many forms, such as video sequences or views from multiple cameras, which is the focus of this paper. The paper presents a method for calculating an object's movement parameters in three-dimensional space using a system that provides stereoscopic vision. The algorithm for motion detection and moving-object tracking is described, including methods for background separation and updating, a method for distinguishing moving objects, and the calculation of their positions in the acquired pictures. Next, the methods for calculating an object's coordinates in three-dimensional space, based on data retrieved from stereoscopic image computation, are discussed in detail. Moreover, the problems of image rectification and stereovision system calibration are discussed in detail. Finally, the method for calculating movement parameters in 3D space is described. At the end of the paper, selected results of research conducted under laboratory conditions are presented.
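A minimal sketch of the geometric core behind stereoscopic coordinate calculation: recovering depth from the disparity between matched pixel positions in a rectified camera pair. The focal length, baseline, and pixel coordinates are illustrative assumptions, and the matching, calibration, and rectification steps described in the paper are not shown:

```python
def triangulate(xl, xr, y, f=800.0, baseline=0.12, cx=320.0, cy=240.0):
    """Back-project a matched pixel pair from a rectified stereo rig to 3-D.

    xl, xr : column of the matched point in the left/right image (pixels)
    y      : row of the point (same in both images after rectification)
    f      : focal length in pixels, baseline : camera separation in metres
    """
    disparity = xl - xr                      # larger disparity means a closer object
    Z = f * baseline / disparity             # depth
    X = (xl - cx) * Z / f                    # lateral position
    Y = (y - cy) * Z / f                     # vertical position
    return X, Y, Z

print(triangulate(xl=352.0, xr=332.0, y=250.0))   # point about 4.8 m away
```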
Pages: 175-182
The Impact of Software Quality
on Maintenance Process
Anas Bassam Al-Badareen, Mohd Hasan Selamat,
Marzanah A. Jabar, Jamilah Din, Sherzod Turaev
Abstract:
Software is always required to be developed and maintained with quality in order to keep pace with rapid progress in industry, technology, the economy, and other fields. Software maintenance is considered one of the main issues in the software development life cycle, requiring more effort and resources than any other phase. Studies estimate that the cost of software maintenance has increased rapidly, reaching 90% of the total cost of the software development life cycle. Therefore, it is considered an economic concern in the information systems community. Several studies have attempted to estimate and reduce the cost of this task. This study introduces a model of the software maintenance process that emphasizes the impact of software quality on the maintenance process. The study presents the software maintenance process and then discusses the quality characteristics that affect its tasks. Furthermore, the evaluation criteria for these factors are discussed.
Pages: 183-190
Reusable Software Component
Life Cycle
Anas Bassam Al-Badareen, Mohd Hasan Selamat,
Marzanah A. Jabar, Jamilah Din, Sherzod Turaev
Abstract:
In order to decrease the time and effort of the software development process and significantly increase the quality of the software product, software engineering requires new technologies. Nowadays, most software engineering design is based on the reuse of existing systems or components, and this has become a main development approach for business and commercial systems. The concept of reusability is widely used in order to reduce the cost, effort, and time of software development. Reusability also increases the productivity, maintainability, portability, and reliability of software products, because reusable software components have already been evaluated several times in other systems. The problem faced by software engineers is not a lack of reuse, but a lack of widespread, systematic reuse. They know how to do it, but they do it informally. Therefore, strong attention must be given to this concept. This study aims to propose a systematic framework that considers reusability throughout the software life cycle from two sides: build-for-reuse and build-by-reuse. Furthermore, the repository of reusable software components is considered, and evaluation criteria for both sides are proposed. Finally, an empirical validation is conducted by applying the developed framework to a case study.
Pages: 191-199
Extending XML Conditional
Schema Representations with WordNet Data
Nicolae Tandareanu, Mihaela Colhon, Cristina Zamfir
Abstract:
Conditional knowledge representation and reasoning represents a new branch of KR&R, for which several formalisms have been developed. In this paper we define XML language specifications for a graph-based representation formalism of such knowledge, enriched with WordNet linguistic knowledge. Our task is to detect when pairs of words (in our formalism they are called objects) can be linked by means of is_a and part_of relationships.
Pages: 200-209
Smart Human Face Detection
System
Iyad Aldasouqi, Mahmoud Hassan
Abstract:
Digital Image Processing (DIP) is a multidisciplinary science that borrows principles from diverse fields such as optics, surface physics, visual psychophysics, computer science and mathematics. Some image processing applications can be found in astronomy, ultrasonic imaging, remote sensing, video communications and microscopy. Face detection/recognition has attracted much attention, and its research has rapidly expanded into many potential applications in computing, communication and automatic access control systems. Furthermore, face detection is an important first step of face recognition. Since face images show many variations in appearance, such as pose variation, occlusion, image orientation, illumination conditions and others, face detection is not straightforward. The full face detection and gender recognition system is made up of a series of connected components. There is much software that can facilitate the detection process, such as Matlab, LabVIEW, C and others. In this paper we propose a fast algorithm for detecting human faces in color images using the HSV color model without sacrificing the speed of detection. The proposed algorithm has been tested on various real images and its performance is found to be quite satisfactory.
Pages: 210-217
An Approach for 3D Object
Recognition of Universal Goods
Bernd Scholz-Reiter, Hendrik Thamer, Claudio Uriarte
Abstract:
Today, unloading processes of standard container
units are mainly executed manually. An automatic
unloading system could automate this labor and time
intensive process step. The crucial challenge in
developing such a system is the object recognition
of goods with undefined shape and size. The development and the successful market launch of the Paketroboter© have shown the feasibility of the
correct detection of cubic goods inside a standard
container unit. Nevertheless, there exists no
established system that is able to unload universal
packaged goods. The requirements for a suitable
object recognition system for goods with undefined shapes are very high. In the case of a high error rate, the automatic unloading process has to be aborted or a manual intervention is necessary.
This paper presents a concept that aims to develop
an object recognition system for classification and
pose detection of universal packaged goods inside a
standard container unit. In order to classify different packaged goods inside a poorly lit container unit, significant sensor data are required.
On the basis of the sensor data, the object
recognition system detects all goods and calculates
suitable 3D gripping points for the manipulator
unit. Therefore, range images from Time-of-Flight
cameras and simulated images are used for image
analysis.
Pages: 218-225
GPU-Based Translation-Invariant
2D Discrete Wavelet Transform for Image Processing
Dietmar Wippig, Bernd Klauer
Abstract:
The Discrete Wavelet Transform (DWT) is applied in various signal and image processing applications. However, its computation is computationally expensive; therefore, plenty of approaches have been proposed to accelerate it. Graphics processing units (GPUs) can be used as stream processors to speed up the calculation of the DWT. In this paper, we present an implementation of the translation-invariant wavelet transform using consumer-level graphics hardware. As our approach was motivated by infrared image processing, our implementation focuses on gray-level images, but it can also be used in color image processing applications. Our experiments show that the computational performance of the DWT can be significantly improved. However, initialisation and data transfer times are still a problem of GPU implementations. They can dramatically reduce the achievable performance if they cannot be hidden by the application. This effect was also observed when integrating our implementation into wavelet-based edge detection and wavelet denoising.
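The translation-invariant (undecimated) 2-D DWT that the paper maps to the GPU can be sketched on the CPU with PyWavelets' stationary wavelet transform; the wavelet, level count, and image size below are illustrative, and the GPU kernels themselves are not shown:

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)   # stand-in for a gray-level infrared image

# Undecimated 2-D wavelet transform: no downsampling, so it is shift-invariant.
coeffs = pywt.swt2(img, wavelet='haar', level=2)
for cA, (cH, cV, cD) in coeffs:
    print(cA.shape, cH.shape, cV.shape, cD.shape)   # every subband keeps the full size
```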
Pages: 226-234
Text Analysis with Sequence
Matching
Marko Ferme, Milan Ojstersek
Abstract:
This article describes some common problems faced in
natural language processing. The main problem consists of a user-given sentence, which has to be matched against an existing knowledge base consisting of semantically described words or phrases. The main problems in this process are
outlined and the most common solutions used in
natural language processing are overviewed. A
sequence matching algorithm is introduced as an
alternative solution and its advantages over the
existing approaches are explained. The algorithm is
explained in detail where the longest subsequences
discovery algorithm is explained first. Then the
major components of the similarity measure are
defined and the computation of concurrence and
dispersion measure is presented. Results of the algorithm's performance on a test set are then shown, and different implementations of algorithm usage are
discussed. The work is concluded with some ideas for
the future and some examples where our approach can
be practically used.
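A minimal sketch of longest-common-subsequence matching between a user sentence and a knowledge-base phrase, the core operation the article builds on; the normalisation below is an illustrative choice, and the concurrence and dispersion measures from the paper are not reproduced:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def similarity(sentence, phrase):
    """LCS length normalised by the longer token sequence, in [0, 1]."""
    a, b = sentence.lower().split(), phrase.lower().split()
    return lcs_length(a, b) / max(len(a), len(b))

print(similarity("show me the weather in Maribor", "weather in Maribor today"))
```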
Pages: 235-242
Grid Learning Classifiers - A
Web Based Interface
Manuel Filipe Santos, Wesley Mathew, Henrique Santos
Abstract:
The toolkit for learning classifier systems for grid data mining is a communication channel between remote users and the GridClass system. GridClass is a system for grid data mining, a grid computing approach to distributed data mining. The toolkit is a web-based system; therefore, end users can set the configuration of each node in the grid environment and execute the GridClass system from a remote location. The configuration module of the toolkit is mainly designed for the sUpervised Classifier System (UCS) as the data mining algorithm. The toolkit has three fundamental functions: creating a new project, updating a project, and executing a project. Initially, the user has to define the project based on the complexity of the problem. While creating a new project, all the data and configuration information about all nodes are stored in a file under a user-defined project name. In the updating phase, the user can make changes in the configuration file or replace the training data for new experiments. There are two sub-functions in the execution phase: executing the GridClass system, and comparing and evaluating the performance of the different executions. The toolkit can store the global model and the related local models, together with their testing accuracies, on the server system. The main focus of this work is to improve the performance of the learning classifier system; therefore, an attempt is made to compare the performance of the learning classifier system with different configurations, which has a significant role. The ROC graph is the best option to represent the performance of a classifier system, and the area under the curve (AUC) is a numerical value that summarizes the ROC curve. Therefore, users can easily measure the performance of the global model with the help of the AUC. Another objective of this work is to provide a friendly environment for end users and better facilities to evaluate the performance of the global model.
Pages: 243-251
Colour Image Segmentation Using
Relative Values of RGB in Various Illumination
Circumstances
Chiunhsiun Lin, Ching-Hung Su, Hsuan Shu Huang,
Kuo-Chin Fan
Abstract: We propose a novel colour segmentation algorithm that can work in various illumination circumstances. The proposed
colour segmentation algorithm operates directly on
RGB colour space without the need of colour space
transformation and it is very robust to various
illumination conditions. Our approach can be
employed in various domains (e.g., human skin colour
segmentation, the maturity of tomatoes).
Furthermore, our approach has the benefits of being
insensitive to rotation, scaling, and translation.
In addition, the system can be applied to different applications, for example, colour segmentation for fruit (vegetable) quality control, by merely changing the values of the parameters (α, β1, β2,
γ1, γ2). Experimental results demonstrate the
practicability of our proposed approach in colour
segmentation.
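The abstract names the parameters (α, β1, β2, γ1, γ2) but not the rule itself, so the following is only a plausible illustration of a relative-value rule on raw RGB channels (channel ratios are unaffected by a uniform illumination scaling); the ratio intervals and parameter values are assumptions:

```python
import numpy as np

def segment_relative_rgb(img, alpha=1.10, beta1=1.05, beta2=2.00, gamma1=1.00, gamma2=2.50):
    """Mark pixels whose channel ratios fall inside the given intervals.

    img: H x W x 3 uint8 RGB image.  The ratio rule and parameter values are
    illustrative, not the published formula.
    """
    r, g, b = (img[..., i].astype(float) + 1e-6 for i in range(3))
    ratio_rg, ratio_rb, ratio_gb = r / g, r / b, g / b
    return ((ratio_rg > alpha) &
            (beta1 < ratio_rb) & (ratio_rb < beta2) &
            (gamma1 < ratio_gb) & (ratio_gb < gamma2))
```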
Pages: 252-261
Research on the Real-time 3D
Image Processing System using Facial Feature
Tracking
Jae-gu Song, Yohwan So, Eunseok Lee, Seoksoo Kim
Abstract:
This research is on the real-time 3D image
processing system using facial feature tracking and
how the system works. When transferring an input 2D
image to a 3D stereoscopic image, this system
provides real-time 3D synthetic images. It also
provides measures to trace a face in the input image
in order to distinguish a person from the background
and includes measures to digitize positional values
within the face by tracking colors and facial
feature points. The real-time 3D image processing
system in this study that uses facial feature
tracking is the preprocessing system for special
effects. Firstly, it allows users to utilize basic
positional values obtained from a person and a
background in the collected images when applying
special effects. Secondly, by checking a 3D
stereoscopic image in real-time, users can verify
composition and image effects prior to application
of special effects. Lastly, data from successfully detected facial areas can be constantly improved and used as a foundation to standardize facial-area detection data as well as to create the plug-in.
Pages: 262-269
Modified Progressive Strategy
for Multiple Proteins Sequence Alignment
Gamil Abdel-Azim, Mohamed Ben Othman, Zaher
Abo-Eleneen
Abstract: One of the important research topics of bioinformatics is multiple protein sequence alignment. Since exact methods for MSA have exponential time complexity, heuristic approaches and progressive alignment are the most commonly used for multiple sequence alignment. In this paper, we propose a modified progressive alignment strategy. Choosing and merging the most closely related sequences is one of the important steps of the progressive alignment strategy, and it depends on the similarity between the sequences. To measure that similarity we need to define a distance. In this paper, we construct a distance matrix; the elements of a row of this matrix correspond to the distances between one sequence and the other sequences. A guide tree is built using the distance matrix. For each sequence we define a descriptor, which is also called a feature vector, and the elements of the distance matrix are calculated based on the distances between the descriptors of the sequences. The descriptor reduces the dimension of the sequence, which yields a faster calculation of the distance matrix and allows a preliminary distance matrix to be obtained without pairwise alignment in the first step. The principal contribution of this paper is the modification of the first step of the basic progressive alignment strategy, i.e., the computation of the distance matrix, which yields a new guide tree. This guide tree is simple to implement and gives good performance. A comparison between the results obtained from the proposed strategy and from ClustalW over the BAliBASE 3.0 database is analyzed and reported. The results of our tests on all datasets show that the proposed strategy is as good as ClustalW in most cases.
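A minimal sketch of the descriptor idea: each sequence is reduced to a fixed-length feature vector (here a k-mer frequency vector, an assumed descriptor form) and the distance matrix is computed between descriptors instead of by pairwise alignment; the guide-tree construction that follows is not shown:

```python
import itertools
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def descriptor(seq, k=2):
    """k-mer frequency vector used as the sequence's feature descriptor."""
    index = {"".join(p): i for i, p in enumerate(itertools.product(AMINO, repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        idx = index.get(seq[i:i + k])
        if idx is not None:
            v[idx] += 1
    return v / max(len(seq) - k + 1, 1)

def distance_matrix(seqs, k=2):
    """Euclidean distances between descriptors; no pairwise alignment needed."""
    D = np.array([descriptor(s, k) for s in seqs])
    diff = D[:, None, :] - D[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

print(distance_matrix(["MKVLA", "MKVLG", "TTTTT"]))
```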
Pages: 270-280
Rule Based Bi-Directional
Transformation of UML2 Activities into Petri Nets
A. Spiteri Staines
Abstract: Many
modern software models and notations are graph
based. UML 2 activities are important notations for
modeling different types of behavior and system
properties. In the UML 2 specification it is
suggested that some forms of activity types are
based on Petri net formalisms. Ideally the mapping
of UML activities into Petri nets should be
bi-directional. The bi-directional mapping needs to
be simplified and operational. Model-to-Model
mapping in theory offers the advantage of fully
operational bi-directional mapping between different
models or formalisms that share some common
properties. However in reality this is not easily
achievable because not all the transformations are
similar. Previous work was presented where it was
shown how Triple Graph Grammars are useful to
achieve this mapping. UML 2 activities have some
common properties with Petri nets. There are
exceptions which require some special attention. In
this paper a simple condensed rule based solution
for complete bi-directional mapping or transforming
UML 2 activities into Petri nets is presented. The
solution should be operational, and can be
represented using different notations. A practical
example is used to illustrate the bi-directional
transformation possibility and conclusions are
explained.
Pages: 281-288
Rewriting Petri Nets as
Directed Graphs
A. Spiteri Staines
Abstract: This
work attempts to understand some of the basic
properties of Petri nets and their relationships to
directed graphs. Different forms of directed graphs
are widely used in computer science. Normally
various names are given to these structures. E.g.
directed acyclical graphs (DAGs), control flow
graphs (CFGs), task graphs, generalized task graphs
(GTGs), state transition diagrams (STDs), state
machines, etc. Some structures might exhibit
bisimilarity. The justification for this work is
that Petri nets are based on graphs and have some
similarities to them. Transforming Petri nets into
graphs opens up a whole set of new interesting
possible experimentations. Normally this is
overlooked. Directed Graphs have a lot of theory and
research associated with them. This work could be
further developed and used for Petri net evaluation.
The related work justifies the reasoning of how and why Petri nets can be obtained or supported using graphs. The transformation approach can be formal or
informal. The main problem tackled is how graphs can
be obtained from Petri nets. Possible solutions that
use reduction methods to simplify the Petri net are
presented. Different methods to extract graphs from
the basic or fundamental Petri net classes are
explained. Some examples are given and the findings
are briefly discussed.
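One simple way (among those the paper discusses) to read a Petri net as a directed graph is to treat places and transitions as vertices and every arc as a directed edge; the sketch below shows only this direct mapping, without the reduction methods:

```python
from collections import defaultdict

def petri_to_digraph(input_arcs, output_arcs):
    """Bipartite directed graph: place -> transition for input arcs,
    transition -> place for output arcs."""
    graph = defaultdict(list)
    for place, transition in input_arcs:
        graph[place].append(transition)
    for transition, place in output_arcs:
        graph[transition].append(place)
    return dict(graph)

# Toy net: p1 -> t1 -> p2 -> t2 -> p1
print(petri_to_digraph(input_arcs=[("p1", "t1"), ("p2", "t2")],
                       output_arcs=[("t1", "p2"), ("t2", "p1")]))
```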
Pages: 289-297
Detection of Pornographic
Digital Images
Jorge A. Marcial-Basilio, Gualberto Aguilar-Torres,
Gabriel Sanchez-Perez, L. Karina Toscano-Medina,
Hector M. Perez-Meana
Abstract: In this paper a novel algorithm to detect explicit content or pornographic images is proposed. It uses the transformation from the RGB color model to the YCbCr or HSV color model; the image is then segmented using skin detection, and finally the percentage of pixels detected as skin tone is calculated. The results obtained using the proposed algorithm are compared with two software solutions, Paraben's Porn Detection Stick and FTK Explicit Image Detection, which are among the leading commercial software solutions for detecting pornographic images. A set of 800 images, of which 400 are pornographic and 400 are natural images, is used to test each system. The proposed algorithm identified up to 68.87% of the pornographic images with 14.25% false positives; Paraben's Porn Detection Stick achieved 71.5% recognition but with 33.5% false positives; and FTK Explicit Image Detection achieved 69.25% effectiveness for the same set of images but with 35.5% false positives. Finally, the proposed algorithm works effectively toward its main goal, which is to apply this method to forensic analysis or pornographic image detection on storage devices.
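A minimal sketch of the described pipeline (RGB to YCbCr, skin segmentation, decision by skin-pixel percentage); the Cb/Cr ranges are common textbook skin-tone values and the decision threshold is an assumption, not the authors' tuned parameters:

```python
import numpy as np

def rgb_to_cbcr(img):
    """Chrominance channels of an H x W x 3 uint8 RGB image (BT.601, full range)."""
    r, g, b = (img[..., i].astype(float) for i in range(3))
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_percentage(img):
    cb, cr = rgb_to_cbcr(img)
    skin = (77 < cb) & (cb < 127) & (133 < cr) & (cr < 173)   # textbook skin range
    return 100.0 * skin.mean()

def is_explicit(img, threshold=30.0):
    """Flag the image when the share of skin-toned pixels exceeds the assumed threshold."""
    return skin_percentage(img) > threshold
```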
Pages: 298-305
Paper Title, Authors, Abstract, Pages (Issue 3, Volume 5, 2011)
Unhealthy Poultry Carcass
Detection Using Genetic Fuzzy Classifier System
Reza Javidan, Ali Reza Mollaei
Abstract:
In this paper the automatic detection of unhealthy poultry in slaughterhouses is discussed, and a new real-time approach based on a genetic fuzzy classifier for the classification of textural images of poultry is proposed. In the presented method, after segmentation of the image into the object (poultry) and background, the size (area), shape (elongation) and color of the object are calculated as features. Then, these crisp values are converted to their normalized fuzzy equivalents, between 0 and 1. A fuzzy rule-based system is then used to infer whether the poultry is normal or not. The parameters of the fuzzy rule-based system are optimized using a genetic algorithm. Finally, if the output of the optimized fuzzy classifier system shows any abnormality, the carcass of the poultry should be removed from the slaughter process. Experimental results on real data show the effectiveness of the proposed method.
Pages: 307-313
Conceptual Model of Mobile
Services in the Travel and Tourism Industry
Antonio Portolan, Krunoslav Zubrinic, Mario
Milicevic
Abstract:
Today, in a time of economic crisis, companies in
all economic sectors should reevaluate their
strategies to achieve the necessary market success.
Recent studies show that the potential customers
would rather spend their earnings on domestic
equipment and electronic devices like laptops and
mobile phones, than on vacations and traveling. This
behavior generates huge losses for the travel
industry and tourism. The potential solution for
that problem is to connect the mobile industry with
the travel and tourism in a way that will encourage
customers to travel more and enjoy the time by using
interactive and helpful content. In this paper we
discuss the possibility of mobile device integration
in the travel and tourism industry and its impact on
potential customer groups. At the end of the paper, a conceptual model of mobile services integration in
the current travel and tourism industry is
presented.
Pages: 314-321
Computational Technologies for
Accreditation in Higher Education
Aboubekeur Hamdi-Cherif
Abstract:
Academic accreditation and assessment in Higher
Education (A3-HE) is, above all, a social status meant to acknowledge that an institution or program is following recognized and required quality criteria derived from common good practice. In a previous work, we described the main processes
involved in A3-HE. Two main issues were reported.
First, heavy and tedious paperwork characterizes
actual academic processes. Second, subjective
judgments might interfere with the processes.
Indeed, both the internal self-examination undergone
by institutions / programs and the external
reviewing processes made by recognized accrediting
bodies are prone to errors and subjective biases, as they are largely based on rule-of-thumb human judgments, despite the presence of standards. In
this paper, we describe a set of computational
technologies to address these issues. Emphasis is
made on technologies spanning (crude) data,
information, refined information including decision
support, ultimately leading to the most refined and
expensive piece of information, i.e., knowledge and
its discovery in large and diversified databases
over the Web, based on cloud computing solutions. A
human-machine interactive knowledge-based learning
control system for A3-HE is our far-reaching goal.
However, the A3-HE processes are too complex to be
addressed by computerized systems alone. As a
result, scaling up to real-life applications still requires much time to reach tangible implementations.
Pages: 322-331
Terminator for E-mail Spam - A
Fuzzy Approach Revealed
P. Sudhakar, G. Poonkuzhali, K. Thiagarajan,
K.Sarukesi
Abstract:
In today's information technology world, the highest degree of communication happens through e-mail. Realistically, most inboxes are flooded with spam e-mails, as many transactions over the Internet are affected by passive and active attacks. Several algorithms exist in the e-world to defend against spam e-mails, but their accuracy in detecting spam e-mail is still oscillating between 80% and 90%. This clearly shows the need for improvement in spam control algorithms along various dimensions. In this proposed work, a new solution based on fuzzy logic was chosen to combat spam e-mails. Various fuzzy rules are created for spam e-mails, and every e-mail is forced to pass through a fuzzy rule filter for identifying spam. The results of each fuzzy rule for the input e-mails are combined to classify the e-mail as spam or not.
Pages: 332-345
An Automatic Method to Generate
the Emotional Vectors of Emoticons Using Blog
Articles
Sho Aoki, Osamu Uchida
Abstract:
In recent years, reputation analysis and opinion
mining services using the articles written in
personal blogs, message boards, and community web
sites such as Facebook, MySpace, and Twitter have
been developed. To improve the accuracy of the
reputation analysis and the opinion mining, we have
to extract emotions or reactions of writers of
documents accurately. And now, graphical emoticons
(emojis in Japanese) are often used in blogs and
SNSs in Japan, and in many cases these emoticons
have the role of modalities of writers of blog
articles or SNS messages. That is, to estimate
emotions represented by emoticons is important for
reputation analysis and opinion mining. In this study, we propose a methodology for automatically generating the emotional vectors of graphical emoticons using the collocation relationship between emotional words and emoticons, which is derived from many blog articles. The
experimental results show the effectiveness of the
proposed method.
Pages: 346-353
National Healthcare Information
System Integration: A Service Oriented Approach
Usha Batra, Saurabh Mukharjee
Abstract:
Healthcare in our home country, India, is a cause of concern even after 63 years of independence. There
is a need to create world-class medical
infrastructure in India and to make it more
accessible and affordable to a large cross section
of our people. Introduction of information
technology in healthcare system may eventually
enhance the overall quality of national standards.
The success in current healthcare system requires
reengineering of healthcare infrastructure for
India. For this, there is a high requirement in
India to invest in IT infrastructure to provide
interoperability in healthcare information system.
Also, integration of IT with healthcare system may
lead to open connectivity at all levels (i.e.
InPatient and OutPatient care), ensuring that
patient information is available anytime and right
at the point of care, eliminating unnecessary delay
in treatment, avoiding replication of test reports,
enabling more informed decisions and hence leading to improved quality of care. With this intent, this paper attempts to present software design patterns
for Service Oriented Architecture (SOA) and its
related technologies for integrating both intra and
inter enterprise stovepipe applications in
healthcare enterprise to avoid replication of
business processes and data repositories. We aim to
develop a common virtual environment for intra and
inter enterprise wide applications in National
Healthcare Information System (NHIS). The ultimate
goal is to present a systematic requirement driven
approach for building an Enterprise Application
Integration (EAI) solution using the Service
Oriented Architecture and Message Oriented
Middleware (MOM) principles. We aim to discuss the
design concept of Enterprise Application Integration
for integration of a healthcare organization and its
business partners to communicate with each other in
a heterogeneous network in a seamless way.
Pages: 354-361
A Method to Extract
Unsteadiness of Concept Attributes Based on Weblog
Yosuke Horiuchi, Osamu Uchida
Abstract:
Concept bases are composed of collections of concept attributes and have recently been used for multiple purposes, such as improving the efficiency of information retrieval and making commonsensical judgments using computers. To construct concept bases, dictionary data are generally used. However, concept attributes are not always static; some of them shift under the influence of various events and incidents. For example, it is to be expected that sports-related attributes in the concept attributes of a country holding a sports event are stronger than at usual times, or that they are appended to the concept attributes of that country. In this study, we consider the application of weblogs to extract the fluctuations of concept attributes. Many weblog articles are influenced by the news, and the number of weblog documents is very large. In this study, we therefore propose a new method to extract the influence of various events and incidents on attributes by regarding the tags given to an article as attributes of the words in the article, and we verify the effectiveness of our method by an experiment.
Pages: 362-369
Semantic Search Itinerary
Recommender System
Liviu Adrian Cotfas, Andreea Diosteanu, Stefan
Daniel Dumitrescu, Alexandru Smeureanu
Abstract:
In this paper we present a novel approach based on
Natural Language Processing and hybrid
multi-objective genetic algorithms for developing
mobile tourism itinerary recommender systems. The
proposed semantic matching technique allows users to find Points of Interest (POIs) that match highly specific preferences. Moreover, it can also be used
to further filter results from traditional
recommender techniques, such as collaborative
filtering, and only requires a minimal initial input
from the user to generate relevant recommendations.
A hybrid multi-objective genetic algorithm has been
developed in order to allow the tourists to easily
choose between several Pareto optimal itineraries
computed in near real-time. Furthermore, the proposed system is easy to use; thus, our solution is both complex and at the same time user-oriented.
Pages: 370-377
Symbolic Neural Networks for
Clustering Higher-Level Concepts
Kieran Greer
Abstract:
Previous work has described linking mechanisms and
how they might be used in a cognitive model that
could even begin to think [6][7][8]. One key problem
is enabling the system to autonomously form its own
concept structures from the information that is
presented. This is particularly difficult if the
information is unstructured, for example, individual
concept values being presented in unstructured
groups. This paper suggests an addition to the
current model that would allow it to filter the
unstructured information to form higher-level
concept chains that would represent something in the
real world. The new architecture also starts to
resemble a traditional feedforward neural network,
suggesting what future directions the research might
take. This extended version of the paper includes
results from some clustering tests, considers
applications for the model and takes a closer look
at the intelligence side of things.
Pages: 378-386
An Autonomous Fuzzy-controlled
Indoor Mobile Robot for Path Following and Obstacle
Avoidance
Mousa T. AL-Akhras, Mohammad O. Salameh, Maha K.
Saadeh, Mohammed A. ALAwairdhi
Abstract:
This paper provides the design and implementation details of an autonomous, battery-powered robot that is based on Fuzzy Logic. The robot is able to follow a pre-defined path and to avoid obstacles; after avoiding an obstacle, the robot returns to the path. The proposed system is divided into two main modules: one for path following, and one for obstacle avoidance and line search. The path-following controller is responsible for following a pre-defined path; when an obstacle is detected, the obstacle avoidance and line search controller is called to avoid the obstacle and then return to the path. When the robot finds the path again, the path-following controller is called again. A LEGO Mindstorms NXT robot was used to realise the proposed design. The detailed design steps of the robot are provided for readers who are interested in replicating the design. For the implementation of the path following, Fuzzy Logic was employed. The Fuzzy Logic controller takes the robot's light and ultrasonic sensor readings as input and sends commands to the robot's motors to control its speed and direction. An extensive set of experiments considering both simple and complicated scenarios for path following and obstacle avoidance was conducted, and the results proved the effectiveness of the system. Images of such scenarios are provided for reference, and many videos were uploaded; the links are given in the paper for interested readers.
|
387-395 |
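To make the fuzzy control idea above concrete, here is a minimal Python sketch of a Mamdani-style steering rule base driven by a single light-sensor reading; the membership functions, rules and steering angles are invented for illustration and do not reproduce the authors' controller.

    # Sketch: a minimal fuzzy steering rule base for line following.
    # Membership functions and rule outputs are illustrative only.
    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def steer(light):
        """light in [0, 100]: low = on the dark line, high = off the line."""
        on_line = tri(light, -1, 20, 50)
        edge = tri(light, 30, 50, 70)
        off_line = tri(light, 50, 80, 101)
        # Rule consequents (steering angle in degrees): straight, mild turn, sharp turn.
        rules = [(on_line, 0.0), (edge, 15.0), (off_line, 40.0)]
        num = sum(w * angle for w, angle in rules)
        den = sum(w for w, _ in rules)
        return num / den if den else 0.0   # weighted-average defuzzification

    for reading in (10, 45, 85):
        print(reading, round(steer(reading), 1))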
B2B Process Integration using
Service Oriented Architecture through Web Services
Adrian Besimi
Abstract:
B2B electronic commerce for Small and Medium
Enterprises (SMEs) is experiencing obstacles due to
the nature of the processes involved. SMEs face
several barriers to entering the B2B market: lack of
understanding, lack of finances and lack of IT
experts who can create customized applications for
standardized B2B e-commerce frameworks such as
ebXML. They also have difficulty choosing the
appropriate channel for communicating B2B messages,
whether a public e-marketplace or their own private
Web Services. Commercial off-the-shelf (COTS)
software is expensive, so this paper proposes a
solution based on the ebXML framework and Web
Services as a middleware layer that does most of the
work for these companies, while also offering
private Web Services for use by the public
e-marketplace. This Service Oriented Architecture
can further be used by external partners to
integrate their B2B processes into their own
Enterprise Systems.
|
396-403 |
Solving the Protein Folding
Problem Using a Distributed Q-Learning Approach
Gabriela Czibula, Maria-Iuliana Bocicor,
Istvan-Gergely Czibula
Abstract:
The determination of the three-dimensional structure
of a protein, the so called protein folding problem,
using the linear sequence of amino acids is one of
the greatest challenges of bioinformatics, being an
important research direction due to its numerous
applications in medicine (drug design, disease
prediction) and genetic engineering (cell modelling,
modification and improvement of the functions of
certain proteins). In this paper we introduce a
distributed reinforcement learning based approach
for solving the bidimensional protein folding
problem, an NP-complete problem that refers to
predicting the two-dimensional structure of a
protein from its amino acid sequence. Our model is
based on a distributed Q-learning approach. The
experimental evaluation of the proposed system has
provided encouraging results, indicating the
potential of our proposal. The advantages and
drawbacks of the proposed approach are also
emphasized.
|
404-413 |
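A minimal sketch of the tabular Q-learning update underlying the approach above, with states taken as partial fold direction strings and actions as lattice moves; this single-agent fragment only illustrates the update rule, not the authors' distributed system, and the reward and parameters are assumed.

    # Sketch: one tabular Q-learning update for 2D lattice folding.
    # States are partial direction strings, actions are lattice moves.
    # Reward and parameters are illustrative, not the authors' settings.
    import random
    from collections import defaultdict

    ACTIONS = ["L", "R", "U", "D"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
    Q = defaultdict(float)                    # Q[(state, action)]

    def choose_action(state):
        if random.random() < EPSILON:
            return random.choice(ACTIONS)     # explore
        return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

    def q_update(state, action, reward, next_state):
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    # One illustrative transition: extend the fold by one residue.
    state = "RRU"                             # directions chosen so far
    action = choose_action(state)
    reward = 1.0                              # e.g. a new hydrophobic contact was formed
    q_update(state, action, reward, state + action)
    print(action, Q[(state, action)])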
Data Loss Prevention for
Confidential Web Contents and Security Evaluation
with BAN Logic
Yasuhiro Kirihata, Yoshiki Sameshima, Takashi
Onoyama, Norihisa Komoda
Abstract:
Since the enforcement of the Private Information
Protection Law of Japan, the protection of
confidential information has been one of the
significant issues for enterprises and
organizations. However, many incidents of
confidential information leakage still occur, and
this has become a serious issue for industry; so far
there is no effective countermeasure to prevent it.
In this paper, we propose a web content protection
system for confidential web contents. The system
provides a special viewer application for the
encrypted content data and prevents copying and
screen capture of the displayed confidential data.
By adopting dynamic encryption through an
intermediate encryption proxy, it is possible to
protect web contents generated dynamically by web
applications. Applying our approach to a
conventional web system, system administrators can
manage the distribution of confidential information
and prevent it from being leaked out of the office.
We describe the system architecture
and implementation details. We also evaluate the
security of the system implementation and the
internal authentication protocol with BAN logic.
|
414-422 |
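As a rough illustration of the intermediate encryption proxy idea above, the sketch below encrypts a dynamically generated page so that only a viewer holding the key can decrypt it; it uses Fernet from the Python cryptography package as a stand-in for the authors' encryption scheme, and the key handling is deliberately simplified.

    # Sketch: encrypting a dynamically generated web page at a proxy so only
    # an authorized viewer application can decrypt it. Fernet (symmetric
    # encryption from the "cryptography" package) stands in for the authors' scheme.
    from cryptography.fernet import Fernet

    viewer_key = Fernet.generate_key()        # shared with the trusted viewer only
    proxy_cipher = Fernet(viewer_key)

    def encrypt_response(html_body: str) -> bytes:
        """What the intermediate proxy would do before the content leaves the server."""
        return proxy_cipher.encrypt(html_body.encode("utf-8"))

    def viewer_decrypt(token: bytes) -> str:
        """What the special viewer application would do before displaying."""
        return Fernet(viewer_key).decrypt(token).decode("utf-8")

    page = "<html><body>Confidential quarterly figures</body></html>"
    protected = encrypt_response(page)
    print(viewer_decrypt(protected) == page)   # True: the round trip succeeds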
Formal Verification of Embedded
Software based on Software Compliance Properties and
Explicit Use of Time
Miroslav Popovic, Ilija Basicevic
Abstract:
The complexity of embedded software running in
modern distributed large-scale systems has grown so
high that it is becoming hard for humans to manage.
Formal methods and their supporting tools offer
effective means for mastering this complexity, and
therefore they remain an important subject of
intensive research and development in
both industry and academia. This paper makes a
contribution to the overall R&D efforts in the area
by proposing a method, and supporting tools, for
formal verification of a class of embedded software,
which may be modeled as a collection of distributed
finite state machines. The method is based on the
model checking of certain properties of embedded
software models by Cadence SMV tool. These
properties are systematically derived from the
compliance test suites normally defined by relevant
standards for compliance software testing, and
therefore we refer to them as software compliance
properties. Another distinctive feature of our
approach is that we allow explicit use of time
within the software properties being verified, which
makes these properties more expressive and brings
them closer to the system properties that are
analyzed in other engineering disciplines. The
supporting tools enable generation of these models
from the high-level design models and/or from the
target source code, for example in C/C++ language.
We demonstrate the usability of the proposed method
on a case study. The subject of the case study is
formal verification of distributed embedded software
actually used in real telephone switches and call
centers.
|
423-430 |
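To illustrate the explicit use of time in a verified property, the following sketch checks a timed response requirement over a recorded execution trace of a finite state machine; this runtime-style check is only an analogy for the symbolic model checking done with Cadence SMV, and the event names and deadline are invented.

    # Sketch: checking a timed property on an execution trace of a state machine:
    # "every CONNECT request must be followed by a CONNECT_ACK within 200 ms".
    # This is a trace-level illustration of explicit time in a property, not the
    # symbolic model checking performed by Cadence SMV.
    def holds(trace, request, response, deadline_ms):
        pending = []                              # timestamps of unanswered requests
        for t, event in trace:
            if event == request:
                pending.append(t)
            elif event == response and pending:
                if t - pending.pop(0) > deadline_ms:
                    return False                  # answered, but too late
        return not pending                        # unanswered requests violate the property

    trace = [(0, "CONNECT"), (120, "CONNECT_ACK"), (300, "CONNECT"), (620, "CONNECT_ACK")]
    print(holds(trace, "CONNECT", "CONNECT_ACK", 200))   # False: second ack took 320 ms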
Using UML Diagrams for Object
Oriented Implementation of an Interactive Software
for Studying the Circle
A. Iordan, M. Panoiu, I. Muscalagiu, R. Rob
Abstract:
This paper presents the steps required for the
Object Oriented Implementation of a computer system
used in the study of the circle. The modeling of the
system is achieved through specific UML diagrams
representing the stages of analysis, design and
implementation, so the system is described in a
clear and concise manner. The software is very
useful to both students and teachers, because
mathematics, and geometry in particular, is
difficult for most students to understand.
|
431-439 |
A Novel Approach to Analyzing
Natural Child Body Gestures using Dominant Image
Level Technique (DIL)
Mahmoud Z. Iskandarani
Abstract:
A novel approach to child body gesture analysis is
presented and discussed. The developed technique
allows the monitoring and analysis of a child's
behavior based on the correlation between head, hand
and body poses. The DIL technique produces several
organized maps resulting from image conversion and
pixel redistribution, thereby aggregating the
child's individual gestures into computable matrices
that are fed to an intelligent analysis system. The
obtained results show that the technique is capable
of classifying the presented body pose and of
modelling the child's body gestures under various
conditions.
|
440-448 |
Paper Title, Authors, Abstract (Issue 4, Volume 5,
2011) |
Pages |
A Design Model for 3D Desktop
Virtual Environments
B. E. Zayas-Perez, J. Vazquez-Bustos
Abstract:
A model for designing desktop virtual environments
is presented. The model considers principles of
user-centred design and learner-centred design to
integrate a design process that better meets the
needs of the learner. The design process is
illustrated in the context of prototyping a virtual
environment for teaching safety information. The
main stages of the design process are described in
detail and practical recommendations are given.
Design implications derived from the evaluation are
described in terms of virtual environment usability
issues.
Results suggest that principles of user-centred
design and learner-centred design should be
considered as complementary paradigms in order to
design usable applications for learning.
|
449-459 |
A Modified Discrete Particle
Swarm Optimization for Solving Flash Floods
Evacuation Operation
Marina Yusoff, Junaidah Ariffin, Azlinah Mohamed
Abstract:
An evacuation operation, the process of evacuating
residents from dangerous sites to safer destinations
in the shortest possible time, is of prime
importance in emergency management. Untimely
assistance and poor coordination at the operational
level have always been major problems in the
evacuation process during flash floods. This paper
focuses on an evacuation vehicle routing solution
using a modification of discrete particle swarm
optimization (DPSO) with a new search decomposition
procedure. A comparative analysis of this algorithm
and a genetic algorithm (GA) on datasets from severe
flash flood events is performed. The findings
indicate that the DPSO provides better performance
in both solution quality and processing time.
Further experimental analysis on a large evacuation
dataset could be carried out to confirm the
performance of the modified DPSO.
|
460-467 |
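A minimal sketch of the discrete-PSO move that such an approach might use: a particle is a visiting order of evacuation sites and its velocity is a list of index swaps pulling it toward its personal best and the global best; the routes and probabilities are illustrative and the paper's search decomposition procedure is not reproduced.

    # Sketch: one discrete-PSO style move on a permutation of evacuation sites.
    # A "velocity" is a list of index swaps that pulls a route toward a better route.
    import random

    def swaps_toward(current, target):
        """Swap sequence that transforms `current` into `target`."""
        route, swaps = list(current), []
        for i, wanted in enumerate(target):
            j = route.index(wanted)
            if i != j:
                route[i], route[j] = route[j], route[i]
                swaps.append((i, j))
        return swaps

    def move(route, personal_best, global_best, c1=0.5, c2=0.5):
        new_route = list(route)
        for target, prob in ((personal_best, c1), (global_best, c2)):
            for i, j in swaps_toward(new_route, target):
                if random.random() < prob:        # apply each attracting swap with some probability
                    new_route[i], new_route[j] = new_route[j], new_route[i]
        return new_route

    route = ["siteA", "siteB", "siteC", "siteD"]
    print(move(route, ["siteB", "siteA", "siteC", "siteD"],
                      ["siteC", "siteB", "siteA", "siteD"]))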
Assessment of Multi-Spectral
Vegetation Indices Using Remote Sensing and Grid
Computing
C. Serban (Gherghina), C. Maftei, C. Filip
Abstract:
A primary goal of many remote sensing projects is to
characterize the type, size and condition of
vegetation present within a region. By combining
data from two or more spectral bands we obtain what
is commonly known as a vegetation index (VI), which
enhances the vegetation signal, while minimizing
solar irradiance and soil background effects. This
study addresses the computation of Normalized
Difference Vegetation Index (NDVI), Ratio Vegetation
Index (RVI), Enhanced Vegetation Index (EVI),
Atmospherically Resistant Vegetation Index (ARVI),
Normalized Difference Snow Index (NDSI) and
Normalized Burn Ratio (NBR) based on satellite
imagery. As the analysis is performed on large data
sets, we used Grid Computing to implement a service
for use on Computational Grids with a Web-based
client interface. This service will be particularly
useful and convenient for those who study the growth
and vigor of green vegetation through environmental
remote sensing but have only typical workstations,
with no special computing and storage resources for
computationally intensive satellite image processing
and no license for a commercial image processing
tool.
|
468-475 |
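For reference, the sketch below computes two of the indices mentioned above per pixel with NumPy, using the standard formulas NDVI = (NIR - Red)/(NIR + Red) and RVI = NIR/Red; the tiny band arrays stand in for full satellite scenes and the grid-service layer is omitted.

    # Sketch: per-pixel NDVI and RVI from red and near-infrared reflectance bands,
    # using the standard formulas NDVI = (NIR - Red) / (NIR + Red) and RVI = NIR / Red.
    import numpy as np

    red = np.array([[0.10, 0.20],
                    [0.05, 0.30]])
    nir = np.array([[0.50, 0.40],
                    [0.45, 0.35]])

    eps = 1e-9                                  # guard against division by zero
    ndvi = (nir - red) / (nir + red + eps)
    rvi = nir / (red + eps)

    print(np.round(ndvi, 3))
    print(np.round(rvi, 3))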
Data Transmission and
Representation Solutions for Wind Power Plants’
Management Systems
Adela Bara, Anda Velicanu, Iuliana Botha, Simona
Vasilica Oprea
Abstract:
This paper presents the data-level solutions that
form the base level of a decision support system for
the National Power Grid. We considered spatial data,
XML and object-relational databases for storing data
and representing the locations and coordinates of
the installed wind power plants, and we detailed the
transmission and storage solutions for GIS. We also
presented the general architecture of the DSS
prototype, which can be used to analyze both the
current production and the predicted wind energy at
the wind power plant locations.
|
476-484 |
Solutions for Analyzing CRM
Systems - Data Mining Algorithms
Adela Tudor, Adela Bara, Iuliana Botha
Abstract:
The main goal of the paper is to illustrate the
importance of the optimization methods used in the
data mining process, as well as the specific
predictive models and how they work in this field.
Customer relationship management systems have also
developed considerably in recent years, offering new
opportunities for a strong and profitable
relationship between a business and its clients.
|
485-493 |
Supporting e-Science
Applications through the On-Demand Execution of
Opportunistic and Customizable Virtual Clusters
Harold Castro, Mario Villamizar, Eduardo Rosales
Abstract:
This paper deals with the design and implementation
of a virtual opportunistic grid infrastructure that
takes advantage of the idle processing capacity
currently available in the computer labs of a
university campus, ensuring that local users have
priority in accessing the computational resources
while a virtual cluster simultaneously uses the
resources they leave idle. A virtualization strategy
is proposed to allow the deployment of opportunistic
virtual clusters whose integration provides a
scalable grid solution capable of supplying the high
performance computing (HPC) capacity required for
the development of e-Science projects. The proposed
solution was implemented and tested through the
execution of opportunistic virtual clusters with
customized application environments for projects
from different scientific disciplines, demonstrating
high efficiency in result generation.
|
494-504 |
Similarities Between String
Grammars and Graph Grammars
Silviu Razvan Dumitrescu
Abstract:
In this paper, we present some studies of the
relations between the well-known Chomsky string
grammars and graph grammars, in particular
hypergraph grammars. We discuss deterministic
context-free Lindenmayer systems used to describe
commands to a device that generates black and white
digital images: instead of the well-known drawing
methods, we paint squares rather than lines. We then
give some important properties of the growth
functions of D0L-systems, and we extend the
discussion to grey-scale and color digital image
generation. The second main part of the paper
concerns normal forms of hyperedge replacement
grammars. Because these grammars are context-free,
each of them can be transformed into an equivalent
grammar without λ-productions and without
rewritings. After that, in a nondeterministic way,
we construct equivalent grammars in Chomsky Normal
Form or Greibach Normal Form; both normal forms are
inspired by string grammars. In the third part of
the paper we illustrate some important differences
between graph grammars and hypergraph grammars with
respect to context-freeness. We also show how to
transform the planar structure of a hypergraph into
a linear one while taking determinism into account,
which opens a path to transforming a pushdown
automaton into an equivalent generative grammar.
|
505-512 |
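A minimal sketch of a D0L-system whose derivation strings are painted as rows of black and white squares rather than drawn as lines, in the spirit described above; the alphabet and production rules are illustrative, not the paper's system (this particular system happens to have a Fibonacci growth function).

    # Sketch: a D0L-system whose strings are interpreted as rows of black ('#')
    # and white ('.') squares rather than as turtle line drawings.
    rules = {"a": "ab", "b": "a"}               # deterministic: one rule per symbol
    axiom = "a"

    def derive(word, steps):
        for _ in range(steps):
            word = "".join(rules[symbol] for symbol in word)
        return word

    def paint(word):
        return "".join("#" if symbol == "a" else "." for symbol in word)

    for n in range(5):
        row = derive(axiom, n)
        print(paint(row))        # each derivation step becomes one row of squares
        # growth function: len(derive(axiom, n)) follows the Fibonacci sequence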
Comparative Study of Adaptive
Network-Based Fuzzy Inference System (ANFIS),
k-Nearest Neighbors (k-NN) and Fuzzy c-Means (FCM)
for Brain Abnormalities Segmentation
Noor Elaiza Abdul Khalid, Shafaf Ibrahim, Mazani
Manaf
Abstract:
The complexity of medical imagery makes segmentation
a challenging problem. This paper conducts a
comparative study of the Adaptive Network-Based
Fuzzy Inference System (ANFIS), k-Nearest Neighbors
(k-NN) and Fuzzy c-Means (FCM) for brain
abnormalities segmentation. The characteristics of
each brain component, namely “membrane”,
“ventricles”, “light abnormality” and “dark
abnormality”, are analyzed by extracting the
minimum, maximum and mean grey-level pixel values.
The segmentation performance of each technique is
tested on one hundred and fifty controlled test
images, which were designed by cutting abnormalities
of various shapes and sizes and pasting them onto
normal brain tissue. The tissues are divided into
three categories, “low”, “medium” and “high”, based
on their grey-level pixel intensities. The
segmentation of light abnormalities outperformed
that of dark abnormalities. ANFIS gave the best
segmentation performance for light abnormalities,
whereas k-NN performed best for dark abnormalities.
|
513-524 |
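As a concrete reference for one of the three compared techniques, here is a compact Fuzzy c-Means sketch on one-dimensional grey-level intensities written with NumPy; the pixel values, number of clusters and fuzzifier are illustrative choices, not the authors' experimental setup.

    # Sketch: Fuzzy c-Means on 1-D grey-level intensities, illustrating the FCM
    # technique compared in the paper. Data, c and m are illustrative choices.
    import numpy as np

    def fcm(x, c=3, m=2.0, iters=50):
        """x: 1-D array of pixel intensities. Returns centers and memberships."""
        rng = np.random.default_rng(0)
        u = rng.random((len(x), c))
        u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1 per pixel
        for _ in range(iters):
            um = u ** m
            centers = (um.T @ x) / um.sum(axis=0)  # weighted cluster centers
            dist = np.abs(x[:, None] - centers[None, :]) + 1e-9
            u = 1.0 / (dist ** (2 / (m - 1)))
            u /= u.sum(axis=1, keepdims=True)      # standard FCM membership update
        return centers, u

    pixels = np.array([12, 15, 14, 80, 85, 82, 200, 210, 205], dtype=float)
    centers, memberships = fcm(pixels)
    print(np.sort(np.round(centers, 1)))           # roughly the three intensity groups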
A Genetic Algorithm Approach to
Improve Automated Music Composition
Nathan Fortier, Michele Van Dyne
Abstract:
Using the rules of music theory, a program was
written which automatically creates original
compositions. These compositions were parameterized
by user input concerning preferences on genre,
tempo, and tonality. Based on these preferences,
initial compositions were generated, and the “best”
composition was presented to the user. Following the
rules of music theory guarantees that the program
produces harmonious compositions, but certain
aspects of musical composition cannot be defined by
music theory. It is in these aspects of musical
composition where the human mind uses creativity.
Using the population of compositions initially
generated for the user, the program then used a
genetic algorithm to evolve compositions that
increasingly match the user’s preferences, allowing
the program to make decisions that cannot be made
using music theory alone. The resulting “best”
composition of the evolved population was then
presented to the user for evaluation. To test the
effectiveness of this approach, each composition,
both initial and final, was ranked by subjects on a
scale from 1 to 10. Subjects expressed a significant
preference for the evolved compositions over initial
compositions.
|
525-532 |
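A minimal sketch of the kind of genetic-algorithm loop described above, evolving short melodies encoded as MIDI pitch lists; the fitness function here (prefer C-major notes and small melodic leaps) is an invented stand-in for the user-preference evaluation used in the paper.

    # Sketch: evolving short melodies (lists of MIDI pitches) with a genetic algorithm.
    # The fitness function is an invented proxy for user preferences.
    import random

    SCALE = {60, 62, 64, 65, 67, 69, 71, 72}        # C major, one octave

    def fitness(melody):
        in_scale = sum(note in SCALE for note in melody)
        smooth = sum(abs(a - b) <= 4 for a, b in zip(melody, melody[1:]))
        return in_scale + smooth

    def mutate(melody, rate=0.2):
        return [random.randint(55, 79) if random.random() < rate else n for n in melody]

    def crossover(a, b):
        cut = random.randint(1, len(a) - 1)
        return a[:cut] + b[cut:]

    population = [[random.randint(55, 79) for _ in range(8)] for _ in range(30)]
    for _ in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                    # simple truncation selection
        population = parents + [mutate(crossover(random.choice(parents),
                                                 random.choice(parents)))
                                for _ in range(20)]

    print(max(population, key=fitness))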
Flexible Views in Visualizing
Multiple Response Survey Using Murvis
Siti Z. Z. Abidin, M. Bakri. C. Haron, Zamalia
Mahmud
Abstract:
In a survey investigation, subjects will normally
provide only one answer for each question. However,
when they are allowed to provide more than one
answer, a specific technique is needed to represent
the observed data in a visual form that can easily
be examined in terms of similarity and dissimilarity
among the survey attributes. One
of the common techniques used in visualizing the
results is by using multidimensional scaling (MDS).
MDS is often used to provide a visual representation
of the pattern of proximities (i.e., similarities or
distances) among a set of objects that allow results
to be interpreted according to the survey subjects
and attributes. However, too many subjects and
attributes will produce massive output points
(coordinates). In order to enhance the
visualization, a tool called Murvis (Multiple
Response Visualization) has been developed using
Java programming language to provide users with the
flexibility in visualizing the MDS output coordinates
in 2D and 3D space. Murvis allows users to add
colors to the graphic visual and present the output
in many different and flexible views. With these
features, analysts of multiple response surveys are
able to present more informative research findings.
A small-scale survey involving three data sets is
used to test the usability and effectiveness of the
tool, and the results indicate its potential.
|
533-541 |
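For context on where the plotted coordinates come from, the sketch below performs classical multidimensional scaling with NumPy, turning a symmetric distance matrix among survey attributes into 2-D coordinates of the kind a tool such as Murvis would display; the distance matrix is illustrative.

    # Sketch: classical MDS, turning a symmetric distance matrix among survey
    # attributes into 2-D coordinates suitable for plotting.
    import numpy as np

    def classical_mds(d, dims=2):
        n = d.shape[0]
        j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        b = -0.5 * j @ (d ** 2) @ j                  # double-centered squared distances
        eigvals, eigvecs = np.linalg.eigh(b)
        order = np.argsort(eigvals)[::-1][:dims]     # keep the largest eigenvalues
        return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

    distances = np.array([[0.0, 1.0, 4.0],
                          [1.0, 0.0, 3.0],
                          [4.0, 3.0, 0.0]])
    print(np.round(classical_mds(distances), 2))     # one row of coordinates per attribute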
Comparison of Histogram
Thresholding Methods for Ultrasound Appendix Image
Extraction
Milton Wider, Yin M. Myint, Eko Supriyanto
Abstract:
Ultrasound imaging can help the physician identify
the cause of an enlarged abdominal organ. This paper
presents an attempt to diagnose appendicitis by
extracting the appendix from abdominal ultrasound
images. Histogram thresholding methods are compared
for the appendix extraction. Moreover, the way the
performance of the appendix extraction methods
changes with the position of the scanning probe is
presented. In order to segment the appendix from the
ultrasound image, the paper discusses the
comparative results of three thresholding
segmentation methods. From this comparison it can be
clearly seen that the proposed method is the most
appropriate one for appendix image segmentation.
Analysis of the extracted appendix images further
indicates that the normal probe view is the best
transducer position.
|
542-549 |
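The abstract does not name the three thresholding methods, so as one common representative of histogram thresholding here is a compact Otsu sketch in NumPy that picks the threshold maximizing the between-class variance; the small image array is illustrative.

    # Sketch: Otsu's histogram thresholding on 8-bit grey levels, as one common
    # representative of the thresholding methods compared for appendix extraction.
    import numpy as np

    def otsu_threshold(image):
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        prob = hist / hist.sum()
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(t) * prob[:t]).sum() / w0
            mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
            between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
            if between > best_var:
                best_t, best_var = t, between
        return best_t

    image = np.array([[20, 25, 30, 200], [22, 28, 210, 205], [24, 26, 29, 215]], dtype=np.uint8)
    t = otsu_threshold(image)
    print(t, (image >= t).astype(int))                # binary mask separating the bright region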
Copyrighted Material, www.naun.org
NAUN
|