International Journal of Computers
E-ISSN: 1998-4308
Volume 8, 2014
Notice: As of 2014 and for the forthcoming years, the publication frequency/periodicity of NAUN Journals follows the 'continuously updated' model. This means that instead of being separated into issues, new papers are added on a continuous basis, allowing a more regular flow and shorter publication times. Papers appear in reverse chronological order, so the most recent one is at the top.
Title of the Paper: Index Picture Selection for Automatically Divided Video Segments
Authors: Gábor Szűcs
Pages: 183-192
Abstract: The methods described in this paper are capable of analyzing and processing videos without any meta-information, dividing continuous videos into segments without human interaction, and selecting index pictures from the segments. In the automatic video segmentation procedure, pictures are sampled, the differences between them are measured, and 1-dimensional clustering – a contribution of the paper – is used to filter out inadequate segment-border candidates. Further contributions of the paper are the construction of two similarity factors among pictures, a common similarity indicator that takes semantic information into consideration, and the last step of the index picture selection: the adaptation of k-means++ clustering to a similarity-based picture set, where distances among images are not available.
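The abstract does not spell out how k-means++ is adapted when only pairwise similarities (and no distances) are available; the following minimal Python sketch shows one plausible reading, in which seeding uses 1 − similarity as a stand-in dissimilarity and centers are medoids (actual pictures). The matrix `sim` and all names are illustrative assumptions, not the authors' code.

```python
import random

def kmeanspp_medoids(sim, k, rng=random.Random(0)):
    """k-means++-style seeding over a similarity matrix.

    sim[i][j] in [0, 1]; dissimilarity is taken as 1 - sim[i][j], and
    cluster centers are actual pictures (medoids), since no coordinate
    space or explicit distances exist.  Illustrative sketch only.
    """
    n = len(sim)
    centers = [rng.randrange(n)]
    while len(centers) < k:
        # Squared dissimilarity to the nearest already-chosen center.
        d2 = [min((1.0 - sim[i][c]) ** 2 for c in centers) for i in range(n)]
        total = sum(d2)
        if total == 0:                     # all pictures identical to a center
            centers.append(rng.randrange(n))
            continue
        r, acc = rng.uniform(0, total), 0.0
        for i, w in enumerate(d2):         # D^2-weighted sampling
            acc += w
            if acc >= r:
                centers.append(i)
                break
    # Assign every picture to its most similar center.
    return centers, [max(centers, key=lambda c: sim[i][c]) for i in range(n)]
```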
Title of the Paper: Emerging Technology Curriculum on the Focus of Cloud Computing
Authors: Hsieh-Hua Yang, Lung-Hsing Kuo, Wen-Chen Hu, Hung-Jen Yang
Pages: 172-182
Abstract: The purpose of this study was to design a curriculum for learning emerging technology, with a focus on cloud computing, at the high-school level. Based upon theories of technology education, the emerging-technology content was selected and organized. The technology universal theory was applied in content selection, and the technological method model was applied to design the learning activities. The content was verified through a professional committee panel review. There are two chapters and nine sessions in the integrated learning materials. The procedures for developing and evaluating an emerging-technology curriculum were concluded from both theoretical and field evidence.
Title of the Paper: Cloud-Based Remote Laboratory Supported by RTAI
Authors: Zoltán Janík, Katarína Žáková
Pages: 166-171
Abstract: The paper presents an online remote laboratory environment that utilizes the cloud computing model. The core of the remote laboratory is a real-time server using the features of the open-source RTAI (Real-Time Application Interface) project. We have enhanced the possibilities of RTAI-based systems and developed a new unified interface for remote access to such systems, following the philosophy of the Platform as a Service (PaaS) and Software as a Service (SaaS) cloud computing models. The described solution enables users to design custom schemes and compile them remotely using the provided web services. Thus, the new functionality creates an environment that allows not only the remote execution of pre-defined real-time tasks, as in standard SaaS clouds, but also the remote creation of custom tasks, following the principles of the PaaS model. Simple integration into existing web applications is ensured by using the WSDL (Web Services Description Language) and SOAP (Simple Object Access Protocol) technologies.
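For a sense of how a WSDL/SOAP interface like the one described can be consumed, here is a minimal sketch using the Python `zeep` library; the endpoint URL and operation names (CompileScheme, GetTaskStatus) are hypothetical placeholders, since the abstract does not name the actual services.

```python
# pip install zeep
from zeep import Client

# Hypothetical WSDL endpoint of the laboratory server; the real
# service location and operations are not given in the abstract.
client = Client("http://lab.example.org/rtai-service?wsdl")

# Upload a custom scheme and ask the server to compile it remotely
# (operation names are assumptions for illustration only).
with open("my_scheme.mdl", "rb") as f:
    task_id = client.service.CompileScheme(f.read())
print(client.service.GetTaskStatus(task_id))
```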
Title of the Paper: Visual Knowledge Mining and Utilization in the Inductive Expert System
Authors: Nittaya Kerdprasop, Kittisak Kerdprasop
Pages: 157-165
Abstract: Advances in computer graphics and human-machine visual systems have made visualization an important tool in current data exploration and analysis tasks. Visual data mining is the combination of visualization and data mining algorithms in such a way that users can explore their data and extract models interactively. Existing visual data mining tools allow users to interactively control the three main steps of data mining: inputting data, exploring the data distribution, and extracting patterns or models from the data. In this paper, we propose a framework to extend these visually controlled steps to the level of model deployment. We demonstrate that both model induction and model deployment can be done through the visual method, using the KNIME and Win-Prolog tools for knowledge acquisition and knowledge deployment, respectively. Model deployment, as presented in this paper, is the utilization of the induced data model as an inductive knowledge source for the inductive expert system, the next generation of knowledge-based systems that integrate automatic learning ability into their knowledge acquisition part.
Title of the Paper: A Study of Elementary Students’ Controlling on Leap Motion
Authors: Yang-Ting Chen, Miao-Kuei Ho, Hung-Jen Yang
Pages: 144-156
Abstract: The purpose of this study was to verify the feasibility of using gesture control for computer free-hand drawing in an educational environment for elementary students. A gesture control device, the Leap Motion, was selected for this study. Some Leap Motion API references are introduced, such as Gesture, Controller, Frame, and Finger. A field hands-on experiment environment was established for elementary students. The procedures of the hands-on experiment were: 1) introducing the hardware and software used in the experiment, 2) video recording students' hand movements while using the gesture control device and their drawing results on the computer, and 3) recording the operating time for statistical analysis. The experimental results and statistical evidence suggested that gesture control could be handled by elementary students in free-hand drawing. Conclusions were also drawn about the average operating times and operating stability.
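As a minimal sketch of the API elements named in the abstract (Controller, Frame, Finger), the following uses the classic Leap Motion Python bindings (SDK v1/v2, whose bindings targeted Python 2); it simply prints fingertip positions per frame and is not the study's experimental software.

```python
# Requires the classic Leap Motion SDK, whose Python bindings expose
# the Controller / Frame / Finger classes mentioned in the abstract.
import Leap

class DrawListener(Leap.Listener):
    def on_frame(self, controller):
        frame = controller.frame()        # latest tracking frame
        for finger in frame.fingers:
            tip = finger.tip_position    # Leap.Vector, millimetres
            print(finger.id, tip.x, tip.y, tip.z)

listener = DrawListener()
controller = Leap.Controller()
controller.add_listener(listener)         # frame callbacks start arriving
raw_input("Press Enter to stop...")       # SDK v2 bindings are Python 2
controller.remove_listener(listener)
```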
Title of the Paper: Modeling Filled Pauses and Silences for Responses of a Spoken Dialogue System
Authors: Kengo Ohta, Norihide Kitaoka, Seiichi Nakagawa
Pages: 136-143
Abstract: In human-to-human dialogue, pauses such as filled pauses and silences play an important role not only as markers of discourse structure [1] but also as cues to subsequent phrases [2]. As these previous studies show, the modeling of filled pauses [20][21] is essential not only in the implementation of speech recognition systems for spontaneous speech [22][23] but also in the implementation of natural spoken dialogue systems, considering the effects of these phenomena on users. In this paper, we propose the modeling of filled pauses and silences in the response utterances of spoken dialogue systems. First, the positions of pauses are investigated in a corpus study of the dialogue data and presentation data of the Corpus of Spontaneous Japanese (CSJ). Based on this investigation, pauses are modeled and inserted into response utterances. Our proposed method is evaluated in subjective experiments on a tourist-guiding task. We compared user comprehension, naturalness and listenability of the system's responses with and without filled pauses and silences. Our results showed that filled pauses positioned at the inter-sentence level can enhance user comprehension and improve the naturalness of a spoken dialogue system.
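A toy sketch of the inter-sentence insertion strategy the abstract reports as most effective is given below; the filler token, silence marker and placement rule are illustrative assumptions, not values taken from the paper.

```python
import re

FILLER = "eeto"          # a common Japanese filled pause ("um"); placeholder
PAUSE = "<sil 400ms>"    # silence marker for the synthesizer; placeholder

def insert_pauses(response: str) -> str:
    """Insert a silence + filled pause at every inter-sentence boundary."""
    sentences = re.split(r"(?<=[.!?])\s+", response.strip())
    out = [sentences[0]]
    for s in sentences[1:]:
        out.append(f"{PAUSE} {FILLER}, {s}")
    return " ".join(out)

print(insert_pauses("The castle opens at nine. Admission is free on Sundays."))
```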
Title of the Paper: Mining Web and Social Networks for Consumer Attitudes towards Government-Owned Croatian National Airline
Authors: Hrvoje Jakopović, Nives Mikelić Preradović
Pages: 128-135
Abstract: The paper gives a critical insight into the process of evaluation in public relations and also points out similarities and differences between contemporary PR and marketing. The aim was to examine the applicability of sentiment analysis to image measurement in the case of the Croatian national airline Croatia Airlines (CA) and to evaluate the company's public relations efforts. The authors observed Croatia Airlines during its reconstruction phase, after several years marked by strikes and financial problems. The analysis showed that the company has a mostly positive image among customers and in new media. The company's Facebook page was analyzed with the goal of assessing its PR efforts; the results indicated that the page is mainly used for one-way communication. This showed how efficient public relations were and where the company has room for improvement. Customer opinions and attitudes proved to be very valuable information for the company's strategic approach to building its image.
Title of the Paper: Effect of Differences of Programming Languages on Information Security Software Quality
Authors: Hyungsub Kim, Sanggu Byun, Seokha Koh
Pages: 120-127
Abstract: Online information infringement has become increasingly diversified. As such infringement attacks have grown more diverse, no single piece of software written in only one programming language is able to defend against every attack completely. Although most developers use one language for several years, only a few understand the language to a full extent. It is deemed that information security software can be improved by exploiting the advantages of object-oriented languages. Many South Korean companies producing information security products, however, use C for software development. The most frequently utilized computer language overall is Java, but when it comes to information security in particular, C is mostly employed, even though some parts of information security software are better handled in Java. In this light, the present research compares the characteristics of ISO/IEC 9126, the Common Criteria (CC) and information security software to arrive at a more appropriate programming language.
Title of the Paper: UniVis - A 3D Software System Visualization Using Natural Metaphors
Authors: Dimitar Ivanov, Milena Lazarova, Haralambi Haralambiev, Delyan Lilov
Pages: 107-119
Abstract: The development of a large software system frequently involves teams of varying size and differing levels of competence. Understanding the system's structure and functionality depends not only on prior knowledge of the data, but also on its representation. This paper presents UniVis, a software visualization tool aiming to facilitate the orientation in and comprehension of complex software systems. By using natural and familiar metaphors, UniVis makes the visualized software system understandable. The integrated navigation approaches aim to provide a natural mechanism for manipulating the resulting visualization by combining interaction techniques with alternative effects on the visualization elements. The end product of the applied approaches is an aesthetically appealing software visualization providing a visually accessible body of knowledge about the presented system.
Title of the Paper: Considering Qualitative and Quantitative Factors to Select Information Security Countermeasures
Authors: Cheol Hwan Jang, Tae-Sung Kim
Pages: 99-106
Abstract: The threat of information security breaches is increasing. Large organizations have been targeted and have lost confidential customer information. Organizations have recognized the importance of information security investments; however, many still lack adequate investment in information security. In this paper, we derive the factors that affect investment in information security, provide a research model in accordance with these decision factors, and analyze the selection priority of information security countermeasures using an AHP (Analytic Hierarchy Process) decision model. According to the findings of this study, qualitative investment decision factors showed relatively higher significance, and regulatory factors carried a relatively higher weight among the information security investment decision factors.
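For readers unfamiliar with AHP, the following minimal Python sketch shows the generic mechanics of the method: priority weights are derived from a pairwise comparison matrix via its principal eigenvector, with a consistency check. The comparison matrix here is a made-up example, not the study's data.

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale (illustrative values only,
# e.g. regulatory vs. cost vs. risk criteria).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # priority weights, sum to 1

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)     # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
print("weights:", w, "CR:", CI / RI)     # CR < 0.1 => acceptably consistent
```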
Title of the Paper: Cloud Image Processing and Analysis Based Flatfoot Classification Method
Authors: Ming-Shen Jian, Jun-Hong Shen, Yu-Chih Chen, Chao-Chun Chang, Yi-Chi Fang, Ci-Cheng Chen, Wei-Han Chen
Pages: 90-98
Abstract: In this paper, a Cloud Image Processing and Analysis based Flatfoot Classification method that helps doctors identify flat feet is proposed. Using image processing and analysis running on different virtual machines in the cloud, the proposed method can remove noise from and shape the foot images derived from X-ray pictures. Each X-ray image, after image processing, is divided into four blocks according to the proposed division method, which considers the proportions of each foot. By dividing the original image into four individual sub-partitions, each divided image can be delivered to a different analysis algorithm for key-point finding, and each image can be processed on an individual virtual machine in the cloud. Using the proposed algorithms implemented in the cloud for the individual sub-partitions of the original image, the system finds four decision points for each block. Based on the integration of the processing results from the different algorithms, the system can automatically identify flat feet. Furthermore, the information and identification results can be provided to the doctor for further manual identification, and the decision points can also be selected manually. In other words, based on the selections made by the doctor, the system can make the results more accurate and objective. The simulation shows that accuracy improves with the DPI of the X-ray picture. Moreover, the different methods used for finding decision points provide different performance.
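A minimal sketch of the sub-partitioning step might look as follows in Python; the equal 25% cuts are placeholders, since the abstract does not give the foot-specific percentages used in the actual division method.

```python
import numpy as np

def partition_foot(image: np.ndarray, cuts=(0.25, 0.50, 0.75)):
    """Split an X-ray into four row-wise blocks for parallel analysis.

    `cuts` are fractional boundaries; the paper's own foot-specific
    percentages are not given in the abstract, so 25% steps stand in.
    """
    h = image.shape[0]
    rows = [0] + [int(h * c) for c in cuts] + [h]
    return [image[rows[i]:rows[i + 1], :] for i in range(4)]

xray = np.zeros((1024, 512), dtype=np.uint8)   # stand-in for a real scan
for i, block in enumerate(partition_foot(xray)):
    print(f"block {i}: {block.shape}")          # each block -> its own VM/algorithm
```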
Title of the Paper: Automatic Coin Classification
Authors: Stefan N. Tică, Costin A. Boiangiu, Andrei Tigora
Pages: 82-89
Abstract: An automatic system that classifies coins is presented and discussed. The system is flexible, being able to identify coins with various appearances photographed in different lighting conditions. For this purpose, a set of robust techniques for thresholding, edge detection and frequency transforms was employed in order to generate, for every coin class, a fingerprint that is as significant and as invariant as possible.
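A rough Python/OpenCV sketch in the spirit of the described pipeline (threshold, edge detection, frequency transform, fingerprint) follows; all parameter values are illustrative guesses, not the authors' settings.

```python
import cv2
import numpy as np

def coin_fingerprint(path: str) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.equalizeHist(gray)                       # tame lighting changes
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 100, 200)                 # edge map of the relief
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(edges.astype(float))))
    # Downsample the magnitude spectrum into a compact feature vector;
    # the FFT magnitude is insensitive to translation of the edge map.
    return cv2.resize(spectrum, (16, 16)).flatten()

fp = coin_fingerprint("coin.png")
print(fp.shape)   # compare fingerprints with e.g. cosine similarity
```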
Title of the Paper: Spatiotemporal Data Model for Web GIS
Authors: J. Konopásek, O. Gojda, D. Klimešová
Pages: 76-81
Abstract: Current Internet standards provide many options for processing and manipulating spatial data, but difficulties occur when temporal data need to be handled. Traditional relational databases, widely used on the Internet, are not designed to store temporal data, and this fact limits the development of more advanced geographic models. This article compares several data models and approaches for managing temporal data and assesses their suitability for practical usage, e.g. performing complex queries, analyzing the managed data, and supporting the construction of specific data formats needed by different web applications.
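A common baseline for temporal data on top of an ordinary relational store is to attach a validity interval to each row and filter "as of" a date; the minimal SQLite sketch below illustrates this valid-time pattern with an entirely made-up schema and data, without claiming it is one of the article's compared models.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE parcel (
    id INTEGER, geometry TEXT,          -- WKT string in a real system
    valid_from TEXT, valid_to TEXT)""") -- validity interval per version
db.executemany("INSERT INTO parcel VALUES (?,?,?,?)", [
    (1, "POLYGON((0 0,1 0,1 1,0 1,0 0))", "2010-01-01", "2012-06-30"),
    (1, "POLYGON((0 0,2 0,2 1,0 1,0 0))", "2012-07-01", "9999-12-31"),
])

# State of parcel 1 as seen on a given date.
as_of = "2011-05-01"
row = db.execute("""SELECT geometry FROM parcel
                    WHERE id = 1 AND valid_from <= ? AND valid_to >= ?""",
                 (as_of, as_of)).fetchone()
print(row[0])
```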
Title of the Paper: Continuous Features in Inductive Learning and the Effect of RULES Family
Authors: Hebah ElGibreen, Mehmet Sabih Aksoy
Pages: 66-75
Abstract: In information systems, researchers are usually concerned with understanding the systems. However, due to the rapid growth of computer technologies, handling large data has become a challenge, so simple prediction algorithms can be more helpful than complex statistical approaches. Specifically, Inductive Learning can be used to tackle difficult problems using simple rules or trees. In this field of Machine Learning, methods are divided into two types: Decision Tree and Covering Algorithms. Current researchers are starting to focus more on Covering Algorithms due to their outstanding properties and the simplicity of their results. In particular, one family called RULES was found to be very interesting and its properties appealing: RULES is one of the most flexible and simplest families, and it has a high learning rate. Nevertheless, even though RULES is being actively improved, it has been surprisingly neglected in existing surveys, especially with respect to numerical datasets, although complex, real-life problems always contain numerical features. Thus, the purpose of this paper is to extend the Inductive Learning literature and investigate the problem of continuous attributes in RULES and other Inductive Learning families. A theoretical analysis is conducted to show the effect of numerical values and why this is still an open research area. In addition, an empirical evaluation shows how the RULES family can be used as the basis for further improvement. Accordingly, this paper can serve as a reference for researchers on which research areas in Inductive Learning are still not covered and need further refinement, especially for complex problems containing numerical values.
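To make the continuous-attribute problem concrete, here is the simplest possible remedy, equal-width discretization, which turns numerical features into nominal bins that a covering algorithm can consume; this is only an illustration, not the method of any particular RULES version.

```python
def equal_width_bins(values, k=4):
    """Map each continuous value to one of k equal-width bin labels."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0           # avoid division by zero
    # Bin labels are usable as nominal attribute values in rule induction.
    return [min(int((v - lo) / width), k - 1) for v in values]

ages = [23, 31, 45, 52, 18, 67, 40]
print(equal_width_bins(ages))              # -> [0, 1, 2, 2, 0, 3, 1]
```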
Title of the Paper: A Study of an Emerging Input Device Using Behavior
Authors: Lung-Hsing Kuo, Li-Ming Chen, Hung-Jen Yang, Miao-Kuei Ho, Hsueh-Chih Lin
Pages: 56-65
Abstract: The purpose of this study was to assess elementary students' use of a new computer input device. The computer input device is important because it functions as a vehicle that transfers users' commands to the computer. Whenever a new device comes onto the market, there is a need to explore its feasibility for educational use before considering applications in educational fields. This study focused on the input device called "Leap Motion". Based upon the theory of planned behavior (TPB), an investigation of 24 elementary students was conducted to assess their technology usage behavior. The TPB model provides a framework for understanding and predicting behavior in a specific context and offered a useful platform for exploring device-usage intentions with respect to applying this new device in educational computing for elementary students.
Title of the Paper: Study of Software Implementation for Linear Feedback Shift Register Based on 8th Degree Irreducible Polynomials
Authors: Mirella A. Mioc, Mircea Stratulat
Pages: 46-55
Abstract: The Linear Feedback Shift Register (LFSR) is the simplest kind of feedback shift register. Because the feedback sequences are simple, a large body of mathematical theory can be applied to analyzing LFSRs. An LFSR generates a pseudo-random sequence of bits because its output is fed back through XOR gates. This property allows the generation of pseudo-noise and pseudo-random number sequences, so LFSRs are used in cryptography, in data encryption and data compression circuits, and in communication and error-correction circuits. Over time, a main problem has been speed, so much research has been devoted to choosing the proper polynomial. This paper presents a timing analysis of the 8th-degree irreducible polynomials. The conclusion of this experiment is that almost all of the obtained results follow the same time distribution.
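As background, a minimal Python sketch of an 8-bit Galois-form LFSR follows; 0x11D encodes x^8 + x^4 + x^3 + x^2 + 1, one particular irreducible (indeed primitive) 8th-degree polynomial, chosen here only as an example since the paper benchmarks many such polynomials.

```python
def lfsr_bits(poly=0x1D, seed=0x01, n=16):
    """Left-shifting Galois LFSR over an 8-bit state.

    `poly` holds the low 8 coefficients of x^8+x^4+x^3+x^2+1 (0x11D);
    the implicit x^8 term corresponds to the bit shifted out each step.
    """
    state = seed & 0xFF
    bits = []
    for _ in range(n):
        msb = (state >> 7) & 1
        bits.append(msb)                 # output bit of this step
        state = (state << 1) & 0xFF      # multiply the state by x ...
        if msb:
            state ^= poly                # ... and reduce modulo the polynomial
    return bits

print(lfsr_bits())   # pseudo-random bits; period 255 for any nonzero seed
```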
Title of the Paper: Crime Data Analysis Using Data Mining Techniques to Improve Crimes Prevention
Authors: Zakaria Suliman Zubi, Ayman Altaher Mahmmud
Pages: 39-45
Abstract: This paper presents a proposed model for crime and criminal data analysis using the simple k-means algorithm for data clustering and the Apriori algorithm for association rules. The paper aims to help specialists discover patterns and trends, make forecasts, find relationships and possible explanations, map criminal networks and identify possible suspects. Clustering is based on finding relationships between different crime and criminal attributes having some previously unknown common characteristics. Association rule mining generates rules from the crime dataset based on frequently occurring patterns, to help the decision makers of our security community take preventive action. The data were collected manually from police departments in Libya. This work aims to help the Libyan government make strategic decisions regarding prevention of the currently rising crime rate. Data on both crimes and criminals were collected from police department datasets to create and test the proposed model; these data were then preprocessed to obtain clean and accurate data using different preprocessing techniques (cleaning, handling missing values and removing inconsistency). The preprocessed data were used to find different crime and criminal trends and behaviors, and crimes and criminals were grouped into clusters according to their important attributes. The WEKA mining software and Microsoft Excel were used to analyze the given data.
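An open-source analogue of the described WEKA workflow can be sketched in Python with scikit-learn (k-means) and mlxtend (Apriori); the toy boolean crime attributes below are fabricated placeholders, not the Libyan dataset.

```python
# pip install pandas scikit-learn mlxtend
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori, association_rules

crimes = pd.DataFrame({          # one row per incident, made-up data
    "theft": [1, 1, 0, 1, 0, 1],
    "night": [1, 1, 0, 1, 1, 0],
    "armed": [0, 1, 0, 1, 0, 0],
    "urban": [1, 1, 1, 0, 1, 1],
}).astype(bool)

# Group incidents by their attribute profile (k chosen arbitrarily).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(crimes)
print("cluster labels:", labels)

# Frequent itemsets and association rules over the same boolean attributes.
freq = apriori(crimes, min_support=0.5, use_colnames=True)
rules = association_rules(freq, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```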
Title of the Paper: Extended System of Honeypots to Detect Threats
Authors: Roman Jasek, Martin Kolarik, Tomas Vymola
Pages: 33-38
Abstract: Advanced Persistent Threats (APTs) are a recently emerged type of threat. APTs continuously gather information and data on specific targets, use various attack techniques to examine the target's vulnerabilities, and then exploit the data obtained by hacking. APTs are very precise and intelligent: they perform specific attacks on specific targets and thus differ from traditional forms of hacking. An APT is precisely focused on specific targets and selects the appropriate types of attack according to its knowledge of the environment. Therefore, it is very difficult to detect APT attacks. This article describes the methods and procedures of APT attacks, analyzes them, and proposes solutions for detecting these threats using a honeypot system. The second part of the paper discusses two possible solutions: a classical honeypot detection system and its modification. The final section presents an experiment that compares the efficacy of these two variants.
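At its core, the honeypot signal such systems build on is simple: any contact with a decoy service is suspicious by construction. A minimal low-interaction Python sketch follows; the port, banner and logging are illustrative choices, not the paper's implementation.

```python
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222   # decoy port: nothing legitimate connects here

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            stamp = datetime.datetime.now().isoformat()
            # Any contact is suspicious by construction: record it.
            print(f"{stamp} probe from {addr[0]}:{addr[1]}")
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")   # fake service banner
```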
Title of the Paper: Agent Based Data Distribution for Parallel Association Rule Mining
Authors: Kamal Ali Albashiri
Pages: 24-32
Abstract: A multi-agent based approach to data mining using a Multi-Agent System (MADM) is described. The system comprises a collection of agents cooperating to address given Data Mining (DM) tasks. The exploration of the system is conducted by considering a specific parallel/distributed Association Rule Mining (ARM) scenario, namely data (vertical/horizontal) partitioning to achieve parallel/distributed ARM. To facilitate the partitioning, a compressed set-enumeration tree data structure (the T-tree) is used together with an associated ARM algorithm (Apriori-T). The aim of the scenario is to demonstrate that the MADM approach is capable of exploiting the benefits of parallel computing, particularly parallel query processing and parallel data access. Both data partitioning techniques are evaluated and compared. The comparison indicates that the data partitioning methods described are extremely effective in limiting the maximal memory requirements of the algorithms, while their execution times scale only slowly and linearly with increasing data dimensions.
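The essence of horizontal partitioning for parallel ARM can be sketched as count distribution: each worker counts candidate itemsets on its own slice of the transactions and the partial counts are summed. The Python sketch below illustrates only this idea and does not reproduce the paper's T-tree or Apriori-T structures.

```python
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def local_counts(transactions):
    """Count candidate 2-itemsets on one horizontal partition."""
    c = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            c[pair] += 1
    return c

if __name__ == "__main__":
    data = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b"}] * 100
    slices = [data[i::4] for i in range(4)]        # horizontal partitions
    with Pool(4) as pool:
        total = sum(pool.map(local_counts, slices), Counter())
    print(total.most_common(3))                    # merged global supports
```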
Title of the Paper: Automated Unattended Installation in Kovárna Viva, a.s.
Authors: Lukas Kralik
Pages: 17-23
Abstract: This paper was created on the basis of the design, development and implementation of an unattended-installation project for the company Kovárna VIVA, a.s. The article briefly describes the project and the field of automated unattended installations. In the introduction, the reader is informed about the project and about offline installation issues in general. The paper then focuses on the analyses necessary for the deployment of automated unattended installations. In conclusion, a sample of the source code of the program (a batch file) that automates the entire process is given.
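The paper's sample is a batch file; an analogous sketch in Python, driving installers with their silent switches, is shown below. Paths and switches are illustrative, and real silent flags differ per installer (/S, /quiet, /verysilent, ...).

```python
import subprocess

INSTALLERS = [
    (r"C:\deploy\7z-setup.exe", ["/S"]),    # NSIS-style silent switch
    (r"C:\deploy\app.msi", []),             # MSI handled via msiexec below
]

for path, args in INSTALLERS:
    if path.lower().endswith(".msi"):
        cmd = ["msiexec", "/i", path, "/quiet", "/norestart"]
    else:
        cmd = [path] + args
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)         # stop if an install step fails
```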
Title of the Paper: Importance of Surface Methods in Human and Automatic Text Summarization
Authors: Nives Mikelić Preradović, Damir Boras, Marta Vlainić
Pages: 9-16
Abstract: Both human and automatic summaries enable a concise display of the most important information from the original text. Summaries written by the author of the document, an expert in the field, or a professional summarizer, or generated by an automatic summarization system, use the same shallow features of the text (such as word frequency or location) to create a high-quality summary. In this paper, we describe these features and compare a summary written by a human with summaries created by the automatic text summarization systems Microsoft Word, SweSum, SHVOONG and Online Brevity Document Summarizer. The research results show that although all these automatic summarizers rely heavily, and solely, on the shallow features of the text, they all generate informative extracts satisfying the quality expectations of human users.
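A minimal frequency-and-position extractive summarizer, i.e. exactly the shallow features discussed, can be sketched in a few lines of Python; the scoring weights are arbitrary illustrative choices, not those of any of the compared systems.

```python
import re
from collections import Counter

def summarize(text: str, n: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))   # word frequency
    scored = []
    for i, s in enumerate(sentences):
        tokens = re.findall(r"[a-z']+", s.lower())
        score = sum(freq[t] for t in tokens) / max(len(tokens), 1)
        score += 1.0 if i == 0 else 0.0     # location bonus: lead sentence
        scored.append((score, i, s))
    top = sorted(scored, reverse=True)[:n]
    # Re-emit the selected sentences in their original order.
    return " ".join(s for _, _, s in sorted(top, key=lambda x: x[1]))

print(summarize("Summaries condense text. They keep key sentences. "
                "Frequency and position are simple but strong cues."))
```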
Title of the Paper: Application of Digital Signature for Verification of the Examination Results
Authors: Lucie Pivnickova, Viliam Dolinay, Vladimir Vasek, Roman Jasek
Pages: 1-8
Abstract: This paper presents the application of digital signatures in medical software applications that work with patients' private examination data. A digital signature is a legally recognized alternative to a physical signature, intended for use in an electronic environment. At present, most states that have legalized digital signatures use them in conjunction with the X.509 standard, which defines the format of certificates and the organization and conduct of certification authorities. A certification authority provides the trusted binding between a person and the public key used for the digital signature. The paper first explains the basic concepts associated with digital signatures. Then a tool prepared for creating digitally signed documents is introduced. Finally, the use of this tool is demonstrated on an example, and the possibilities of its use in practical situations are pointed out.
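A minimal sign-and-verify round trip with Python's `cryptography` package illustrates the mechanics; in the setting the paper describes, the private key would be bound to an X.509 certificate issued by a certification authority, whereas here a throwaway RSA key stands in for it.

```python
# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"Examination result: haemoglobin 14.1 g/dl"   # made-up payload

# Sign the document with RSA PKCS#1 v1.5 over SHA-256.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verification raises InvalidSignature if the document was altered.
private_key.public_key().verify(
    signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature valid")
```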