International Journal of Computers

 
E-ISSN: 1998-4308
Volume 12, 2018

Notice: As of 2014 and for the forthcoming years, the publication frequency/periodicity of NAUN Journals is adapted to the 'continuously updated' model. What this means is that instead of being separated into issues, new papers will be added on a continuous basis, allowing a more regular flow and shorter publication times. The papers will appear in reverse order, therefore the most recent one will be on top.


 


Volume 12, 2018


Title of the Paper: Cyber-Physical System Modeling based on iGMDH Algorithm

 

Authors: Shengbin Ren, Fei Huang

Pages: 111-118

Abstract: Aiming at the problem of error and error accumulation caused by data imprecision and uncertainty in CPS modeling, this paper proposes an iGMDH algorithm based on GMDH and the idea of interval analysis. Firstly, a contractor is introduced to improve the SIVIA algorithm, resolving the heavy computation, long run time, and deadlock that arise when SIVIA performs its dichotomous search. The inputs and calculations of GMDH are then transformed into interval numbers and interval operations, and the model parameters are estimated using the improved SIVIA algorithm. Finally, the midpoint of each interval parameter is taken as the point estimate of the parameter, and the intermediate models are filtered using the external criterion to establish the final system model. Experiments show that the iGMDH algorithm significantly improves accuracy and noise immunity compared with the original algorithm and effectively solves the problem of error and error accumulation in CPS modeling.
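
To illustrate the interval-arithmetic idea described in the abstract (inputs and calculations replaced by interval numbers, with the interval midpoint taken as the point estimate), the following is a minimal Python sketch; the Interval class and the example values are illustrative assumptions, not the authors' implementation.

# Minimal interval-arithmetic sketch: intervals propagate data uncertainty
# through calculations, and the midpoint serves as the point estimate.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def midpoint(self):
        return (self.lo + self.hi) / 2.0

# An imprecise measurement x in [1.9, 2.1] and a parameter a in [0.45, 0.55].
x = Interval(1.9, 2.1)
a = Interval(0.45, 0.55)
y = a * x + Interval(1.0, 1.0)     # y = a*x + 1 computed on intervals
print(y.lo, y.hi, y.midpoint())    # enclosure of y and its point estimate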


Title of the Paper: Prototype of Security System Based on Multi-Agent Architecture

 

Authors: G. Tsochev, R. Trifonov, G. Popov, R. Yoshinov, Sl. Manolov, G. Pavlova

Pages: 105-110

Abstract: The technology of intrusion detection in computer networks is still young and dynamic, and an active market is currently forming in this area. Intrusion detection systems (IDS) are becoming increasingly common among companies of various sizes. Unfortunately, these systems, designed to identify and repel attacks by hackers, can themselves be exposed to unauthorized influences that disrupt their performance and prevent them from fulfilling their tasks. The present paper describes some of the results obtained at the Faculty of Computer Systems and Technology at the Technical University of Sofia in the implementation of a project on the application of intelligent methods for increasing security in computer networks. The paper introduces a model for IDS in which multi-agent systems and artificial intelligence are applied by means of simple real-time models constructed in a laboratory environment.


Title of the Paper: Multi-Layer Networks: Origin, Community Detection, Applications

 

Authors: Babak Farzad, Oksana Pichugina, Liudmyla Koliechkina

Pages: 92-104

Abstract: Communities are common structures in social networks. These structures are typically formed by different attributes and consequently have different textures in the network. Standard Community Detection (CD) methods detect and extract them only to some degree: they often form a node partition clearly related to a single dominant node attribute and are unable to detect the whole variety of communities in the network. We study CD on multi-attributed affiliation networks, i.e. networks whose nodes carry a number of attributes and whose edges form according to attribute similarity. Such networks can be represented as a composition of single-node-attribute networks, called one-layer networks, each yielding a node partition into attribute clusters. We believe that these partitions can be detected by standard CD algorithms applied to a network that accumulates both structural information and node attributes. We propose an iterative method called the Multi-Layer Community Detection Algorithm (MLCDA), comprising two stages: a synthesis phase that utilizes the available network data, and a decomposition phase in which communities are extracted layer by layer. The synthesis includes converting the original network into a weighted one based on assumptions about the network model, constructing an association network that accumulates the node attributes, and merging these networks into an accumulated network. In the decomposition phase, CD is conducted on the accumulated network; for the obtained partition an underlying node attribute is determined; an approximation of the corresponding one-layer network is constructed and subtracted from the accumulated network; and these steps are repeated iteratively.
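
The following is a minimal Python sketch of the synthesis and first decomposition steps described above, under illustrative assumptions: networkx's greedy modularity algorithm stands in for the standard CD algorithm, Zachary's karate club graph stands in for the structural layer, and the association-edge weight of 0.5 is arbitrary.

# Sketch of the synthesis phase: combine a structural layer and an
# attribute-association layer into one weighted "accumulated" network,
# then run a standard community detection algorithm on it.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

structure = nx.karate_club_graph()                               # stand-in structural layer
attributes = {n: structure.nodes[n]["club"] for n in structure}  # one node attribute

accumulated = nx.Graph()
accumulated.add_nodes_from(structure)
for u, v in structure.edges():                     # structural edges, weight 1
    accumulated.add_edge(u, v, weight=1.0)
for u in structure:                                # association edges: same attribute
    for v in structure:
        if u < v and attributes[u] == attributes[v]:
            w = accumulated.get_edge_data(u, v, {"weight": 0.0})["weight"]
            accumulated.add_edge(u, v, weight=w + 0.5)

# Decomposition phase (first iteration): detect communities on the accumulated
# network; the full method would then identify the dominant attribute, subtract
# the corresponding one-layer approximation, and repeat.
communities = greedy_modularity_communities(accumulated, weight="weight")
print([sorted(c) for c in communities])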


Title of the Paper: Selecting Type of Response for Chat-like Spoken Dialogue Systems Based on Acoustic Features of User Utterances

 

Authors: Kengo Ohta, Ryota Nishimura, Norihide Kitaoka

Pages: 88-91

Abstract: This paper describes a method of automatically selecting types of responses in conversational dialog systems, such as back-channel responses, changing the topic, or expanding the topic, using acoustic features extracted from user utterances. These features include spectral information described by MFCCs and LSPs, pitch information expressed by F0, loudness, etc. A corpus of dialogues between elderly people and an interviewer was constructed, and the results of evaluation experiments showed that our method achieved an F-measure of 49.3% in a speech segment identification task. Moreover, further improvement was achieved by utilizing the delta coefficients of each feature.
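
A minimal sketch of the overall pipeline described in the abstract, under illustrative assumptions: librosa extracts MFCC features plus deltas, a support vector machine stands in for the classifier, and the file paths and labels are placeholders for a labelled dialogue corpus rather than the authors' data.

# Sketch: extract MFCC features (and their deltas) from user utterances and
# train a classifier to predict the response type, scored with the F-measure.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def utterance_features(wav_path):
    """Mean MFCC vector (plus delta coefficients) of one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    delta = librosa.feature.delta(mfcc)
    return np.concatenate([mfcc.mean(axis=1), delta.mean(axis=1)])

# Placeholder paths and labels (e.g. "backchannel", "expand_topic") standing
# in for a labelled dialogue corpus.
wav_files = ["utt001.wav", "utt002.wav", "utt003.wav", "utt004.wav"]
labels = ["backchannel", "expand_topic", "backchannel", "expand_topic"]

X = np.vstack([utterance_features(f) for f in wav_files])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                          stratify=labels, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print(f1_score(y_te, clf.predict(X_te), average="macro"))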


Title of the Paper: Internet Addiction of First-Year Students in a Technical University

 

Authors: S. V. Lavrinenko, P. I. Polikarpov

Pages: 83-87

Abstract: Internet addiction is one of the most pressing problems of our time, especially among young people. The article presents the results of an investigation of Internet addiction among first-year students of one of Russia's leading technical universities. The study was conducted using K. Young's technique. According to the results, the problem is very serious: symptoms of Internet addiction were found in nearly one-third of the students, and only 10 percent of the students are ordinary Internet users. It is therefore necessary to organize the educational process in the electronic environment more actively and efficiently, so that the time students spend at the computer benefits their education without negative consequences.


Title of the Paper: Recommendation System Based on Collaborative Filtering for Resources and Educational Materials on the Web

 

Authors: Santiago Zapata, Fernanda Lemunguir S.

Pages: 76-82

Abstract: In this work, the different types of recommendation systems are studied; the most widely used algorithms and metrics are described, as well as the problems that arise in their design. The developed system is a web site, called LibreriaSR, in which a recommendation system based on collaborative filtering is implemented. Similarity between users is computed with the Pearson correlation coefficient, and this similarity metric is used to find the users most closely related to the active user in order to predict their interest in content and recommend books that are likely to interest them, guided by the idea that users who had similar tastes in the past are very likely to have similar tastes in the future.
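
The following is a minimal Python sketch of the user-based collaborative filtering described in the abstract: Pearson correlation between users, followed by a weighted prediction for an unrated book; the ratings matrix is a made-up toy example, not LibreriaSR data.

# Minimal user-based collaborative filtering sketch: Pearson similarity
# between users, then a weighted prediction for an unrated item.
import numpy as np

def pearson(u, v):
    """Pearson correlation over the items both users have rated (> 0)."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    cu, cv = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((cu ** 2).sum() * (cv ** 2).sum())
    return float(cu @ cv / denom) if denom else 0.0

# Rows = users, columns = books, 0 = not rated (made-up toy data).
R = np.array([[5, 3, 0, 1],
              [4, 0, 4, 1],
              [1, 1, 5, 5]], dtype=float)

active, item = 0, 2                      # predict book 2 for user 0
sims = np.array([pearson(R[active], R[u]) for u in range(len(R))])
neighbours = [u for u in range(len(R)) if u != active and R[u, item] > 0]
num = sum(sims[u] * (R[u, item] - R[u][R[u] > 0].mean()) for u in neighbours)
den = sum(abs(sims[u]) for u in neighbours)
prediction = R[active][R[active] > 0].mean() + (num / den if den else 0.0)
print(round(prediction, 2))              # predicted rating for the unrated book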


Title of the Paper: A New Method to Medical MRI Images Restoration with Swarm Intelligence

 

Authors: Mehdi Zekriyapanah Gashti, Rouhollah Habibey

Pages: 70-75

Abstract: Due to the limited speed of sensors in MRI imaging, sampling at the Nyquist rate results in a longer imaging time. This causes patient discomfort and motion-induced geometric deformities, and thus reduces image quality. In this study, we provide a new method for reducing image noise in which sparse signal representation is used to restore degraded and noisy areas. Particle swarm optimization is used to improve the accuracy of the sparse representation. The simulation results indicate that the proposed method is more efficient than most popular noise removal methods in terms of PSNR (peak signal-to-noise ratio), MSE (mean square error), and image quality, and is also more powerful at retrieving the subtleties and details of the image than the most prominent available noise removal methods.
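
A minimal sketch of the evaluation metrics named in the abstract, MSE and PSNR, computed between an original image and a degraded version; the toy arrays are illustrative, and the authors' sparse-representation and particle swarm optimization steps are not reproduced here.

# Sketch of the evaluation metrics mentioned in the abstract: MSE and PSNR
# between an original image and its noisy/restored version (toy arrays here).
import numpy as np

def mse(original, restored):
    return float(np.mean((original.astype(float) - restored.astype(float)) ** 2))

def psnr(original, restored, max_value=255.0):
    m = mse(original, restored)
    return float("inf") if m == 0 else 10.0 * np.log10(max_value ** 2 / m)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noisy = np.clip(original + rng.normal(0, 10, (64, 64)), 0, 255).astype(np.uint8)
print(mse(original, noisy), psnr(original, noisy))   # higher PSNR = better restoration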


Title of the Paper: Assessment of Priorities in the Analytical Hierarchy Process by Evolutionary Computing

 

Authors: Ludmil Mikhailov

Pages: 66-69

Abstract: The paper investigates the application of evolutionary algorithms (EA) for assessment of priorities in the Analytical Hierarchy Process by solving a two-objective prioritisation problem. We propose two evolutionary computing approaches, based on single-objective and multi-objective EA. Our preliminary results from a Monte-Carlo simulation show that the multi-objective EA outperforms the single-objective solution approach with respect to accuracy and computational efficiency.
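
The following is a minimal sketch of the single-objective idea, under illustrative assumptions: priority weights are estimated from a pairwise comparison matrix by minimising the total deviation |a_ij - w_i/w_j| with SciPy's differential evolution; the comparison matrix and the error criterion are examples, not the authors' formulation.

# Sketch: estimate AHP priority weights w from a pairwise comparison matrix A
# by minimising the total deviation |a_ij - w_i / w_j| with an evolutionary
# algorithm (differential evolution, a single-objective EA).
import numpy as np
from scipy.optimize import differential_evolution

A = np.array([[1.0, 3.0, 5.0],        # illustrative 3x3 comparison matrix
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

def fitness(w):
    w = np.abs(w) / np.sum(np.abs(w))          # normalise to a priority vector
    return sum(abs(A[i, j] - w[i] / w[j])
               for i in range(len(w)) for j in range(len(w)))

result = differential_evolution(fitness, bounds=[(0.01, 1.0)] * 3, seed=1)
weights = np.abs(result.x) / np.sum(np.abs(result.x))
print(weights)                                 # estimated priorities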


Title of the Paper: Data Interoperability Across IoT Domains

 

Authors: Károly Farkas, Zoltán Pödör, Gergely Mezei, Ferenc Somogyi

Pages: 60-65

Abstract: Nowadays the proliferation of IoT (Internet of Things) devices results in heterogeneous and proprietary sensor data formats, which makes the processing and interpretation of sensor data across IoT domains challenging. Achieving syntactic interoperability (the ability to exchange uniformly structured data) and semantic interoperability (the ability to interpret the meaning of data unambiguously) is therefore still an open research issue. In this paper, we introduce and discuss our purpose-built scripting language, called Language for Sensor Data Description (L4SDD), and the basic principles of our generic, ontology-based approach to achieving cross-domain syntactic and semantic interoperability. Moreover, we illustrate our solutions via a real-life smart parking case study.
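
As a rough illustration of syntactic interoperability only (not of the L4SDD language itself, whose syntax is not shown here), the following Python sketch maps two hypothetical proprietary sensor payloads onto one uniform structure.

# Generic sketch of syntactic interoperability: two proprietary sensor
# payloads are normalised into one uniform structure. This is an illustration
# of the idea only; it does not use the L4SDD language.
from datetime import datetime, timezone

def from_vendor_a(payload):
    """Hypothetical vendor A reports Celsius with a Unix timestamp."""
    return {"sensor_id": payload["id"],
            "quantity": "temperature",
            "value": payload["temp_c"],
            "unit": "Cel",
            "time": datetime.fromtimestamp(payload["ts"], tz=timezone.utc).isoformat()}

def from_vendor_b(payload):
    """Hypothetical vendor B reports Fahrenheit with an ISO timestamp."""
    return {"sensor_id": payload["device"],
            "quantity": "temperature",
            "value": round((payload["tempF"] - 32) * 5 / 9, 2),
            "unit": "Cel",
            "time": payload["timestamp"]}

a = {"id": "p-001", "temp_c": 21.5, "ts": 1514764800}
b = {"device": "lot-17", "tempF": 70.7, "timestamp": "2018-01-01T00:00:00+00:00"}
print(from_vendor_a(a))
print(from_vendor_b(b))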


Title of the Paper: Proposed Modifications in ITU-T G.729 8 Kbps CS-ACELP Speech Codec and its Overall Comparative Performance Analysis with CELP Based 12.2 Kbps AMR-NB Speech Codec

 

Authors: A. Nikunj Tahilramani, B. Ninad Bhatt

Pages: 54-59

Abstract: This paper proposes exploiting the excitation codebook structure of the standard extended G.729 11.8 Kbps codec [1], which has 2 non-zero pulses per track, in the existing standard 8 Kbps CS-ACELP (80 bits/10 ms) speech codec [1]. The proposed approach avoids the use of the two algebraic codebook structures for the forward and backward modes of G.729E working at 11.8 Kbps, using a least-significant-pulse replacement approach to find the optimized excitation codevector. The proposed modification of the excitation codebook structure in the standard 8 Kbps CS-ACELP (80 bits/10 ms) speech codec yields a bit rate of 11.6 Kbps (116 bits/10 ms). The paper presents a comparative analysis between the proposed 11.6 Kbps CS-ACELP based speech codec and the standard 12.2 Kbps CELP based Adaptive Multi-Rate Narrowband (AMR-NB) speech codec [2]. The analysis shows that the subjective and objective results of the proposed 11.6 Kbps CS-ACELP based speech codec compare favourably with those of the 12.2 Kbps AMR-NB CELP based speech codec. The proposed 11.6 Kbps CS-ACELP speech codec is implemented in MATLAB.


Title of the Paper: An Efficient Non-Separable Architecture for Haar Wavelet Transform with Lifting Structure

 

Authors: Serwan A. Bamerni, Ahmed K. Al-Sulaifanie

Pages: 43-53

Abstract: In this paper, a memory-efficient, fully integer-to-integer, parallel architecture for the 2-D Haar wavelet with a lifting scheme is proposed. The main problem in most 2-D architectures is the intermediate or internal (on-chip) memory, which is usually proportional to the image width; increasing the internal memory increases the die area and control complexity. The proposed non-separable architecture is derived by rearranging and combining the lifting steps carried out in the vertical and horizontal directions and performing them in a single, simple step. In addition to eliminating the internal memory, the proposed algorithm outperforms existing architectures in terms of hardware utilization, latency, number of arithmetic operations, power consumption, and area. Finally, the proposed algorithm has been implemented on a Xilinx Spartan 3A development kit.
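
For background, the following is a minimal Python sketch of the integer-to-integer Haar lifting step (the S-transform) on a 1-D signal; the paper's contribution, the non-separable 2-D hardware architecture, is not reproduced here.

# Sketch of the integer-to-integer Haar lifting step on a 1-D signal:
# detail d = odd - even, approximation s = even + floor(d / 2).
def haar_lifting_forward(x):
    s, d = [], []
    for i in range(0, len(x) - 1, 2):
        detail = x[i + 1] - x[i]
        s.append(x[i] + (detail >> 1))   # arithmetic shift = floor division by 2
        d.append(detail)
    return s, d

def haar_lifting_inverse(s, d):
    x = []
    for si, di in zip(s, d):
        even = si - (di >> 1)
        x.extend([even, even + di])
    return x

signal = [12, 10, 8, 14, 7, 7, 3, 9]
approx, detail = haar_lifting_forward(signal)
assert haar_lifting_inverse(approx, detail) == signal   # perfect reconstruction
print(approx, detail)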


Title of the Paper: A Novel Approach for Solving Medical Image Segmentation Problems with ACM

 

Authors: Ch. Janardhan, K. V. Ramanaiah, K. Babulu

Pages: 33-42

Abstract: In this paper we propose a novel algorithm for medical image segmentation that employs an active contour model (ACM) with the level set method. The algorithm takes advantage of local edge features to accurately drive the contour to the required boundary region. The analysis and detection of brain tumors from magnetic resonance imaging (MRI) is very important for radiologists and image processing researchers; if objects of interest and their boundaries can be located correctly, meaningful visual information is provided to physicians, making the subsequent analysis much easier. Among the numerous image segmentation algorithms, the active contour model is widely used because it yields a clear curve around the object. The proposed algorithm measures the alignment between the evolving contour's normal direction of movement and the image gradient in the adjacent regions inside and outside the evolving contour, and also considers the average edge intensity in those regions. This minimizes the negative effect of weak edges on segmentation accuracy.
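
The following is a minimal sketch of the edge indicator commonly used in edge-based level-set active contour models, g = 1/(1 + |∇(G_σ * I)|²), which is small near strong edges so that the evolving contour slows and stops there; the toy image is illustrative, and the authors' full alignment and average-edge-intensity terms are not reproduced.

# Sketch of a standard edge indicator for edge-based level-set ACMs:
# g(x) = 1 / (1 + |grad(G_sigma * I)|^2), small near strong edges.
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(image, sigma=1.5):
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)

# Toy image: dark background with a bright square standing in for a region of interest.
img = np.zeros((64, 64))
img[20:44, 20:44] = 200.0
g = edge_indicator(img)
print(g.min(), g.max())   # g is small along the square's boundary, close to 1 elsewhere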


Title of the Paper: Identifying the Efficiency of OpenID by Simulation Design

 

Authors: Lung-Hsing Kuo, Fong-Ching Su, Hung-Jen Yang

Pages: 25-32

Abstract: Fruitful on-line learning resources have been promoted in Taiwan since 1998. The purpose of this study was to identify the OpenID service efficiency of on-line learning servers. The key concept of OpenID is that the user manages only one ID, for convenience. Account circulation performance should be maintained equally across servers; otherwise, the on-line learning resources supported by different servers might be chosen in biased ways. Three services were selected for evaluation; they are supported by government funding, and the Ministry of Education promotes these sites to young learners. By simulating the HTTP requests generated by multiple simultaneous users, the OpenID service performance under normal load was measured to collect data for a one-way ANOVA. A significant difference in OpenID efficiency was found among the servers. It was concluded that the on-line learning entry experience might be distorted, that learners might not choose on-line learning resources freely, and that resource choice behavior might be affected by OpenID performance.
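
A minimal sketch of the statistical step described in the abstract, a one-way ANOVA comparing OpenID response times collected from three servers; the latency samples are made up for illustration.

# Sketch of the statistical procedure the abstract describes: a one-way
# ANOVA comparing OpenID response times (in ms) measured on three servers.
# The latency samples below are made up for illustration.
from scipy.stats import f_oneway

server_a = [120, 135, 128, 140, 131, 126]
server_b = [152, 160, 149, 158, 163, 155]
server_c = [122, 130, 127, 138, 125, 129]

f_stat, p_value = f_oneway(server_a, server_b, server_c)
print(f_stat, p_value)   # p < 0.05 indicates a significant efficiency difference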


Title of the Paper: Alternatives of Work with Risks Used at Technological Facilities Safety Management

 

Authors: D. Prochazkova, J. Prochazka

Pages: 19-24

Abstract: The safety of technological facilities is based on coping with risks. With regard to dynamic world development, it is necessary to monitor the priority risks, to cope with them over time, and also to measure the respective safety. To measure the safety level, we use the known experience that the better the relevant risks are coped with, the higher the facility safety level. The analysis of published sources and of data from real practice shows that seven domains of work with risks are important. The paper presents the results of a critical judgement of the individual techniques that are used in practice when working with risks at technological facilities.


Title of the Paper: An Intelligent Cartographic Generalization Algorithm Selecting Mode Used in Multi-Scale Spatial Data Updating Process

 

Authors: Junkui Xu, Dong Li, Longfei Cui

Pages: 15-18

Abstract: In the multi-scale spatial data updating process, cartographic features vary dramatically as the scale evolves, so selecting a suitable cartographic generalization algorithm that can fulfill the scale-transformation task is a critical step. This problem is also a main obstacle to automatic spatial data updating. Through an in-depth study of the workflow of the multi-scale spatial data updating process, an intelligent cartographic generalization algorithm selecting mode is proposed. Firstly, a cartographic generalization algorithm base, a knowledge base, and a case base are built in this mode. Secondly, based on decomposing the cartographic generalization process into segments, a self-adapting cartographic generalization algorithm selecting architecture is constructed. Thirdly, an intelligent flow for selecting and applying cartographic generalization algorithms is established and put into effect. Overall, this mode provides a new idea for solving the automation problem of multi-scale spatial data updating.


Title of the Paper: Facial Action Coding System for the Tongue

 

Authors: Rahma M. Tolba, Taha El-Arif, El-Sayed M. El Horbaty

Pages: 9-14

Abstract: FACS (Facial Action Coding System) is an anatomically based system for describing all observable facial movements. FACS provides a very reliable description of the upper parts of the face but not of the lower parts, which prevents it from being the dominant technique in the facial animation field. In this paper, we propose 12 AUs (Action Units) for the tongue, based on tongue anatomy and following the same format Paul Ekman used in defining the FACS AUs. We applied these AUs to a 3D human model using Daz Studio Pro and compared the results with photos captured of real humans performing the proposed AUs; the results were very similar to the movements performed by the real humans. We then used the proposed AUs to animate a very popular tongue movement in Egypt called Zaghrouta; the resulting animation was very realistic and almost identical to a video of an Egyptian woman performing this movement. We now have anatomically defined and reliable action units to control the tongue, overcoming the FACS limitation in this area. The proposed AUs offer additional pose control dials, as an add-on for existing computer graphics software, which give the animator more control and flexibility over the tongue and new levels of dynamic movement.


Title of the Paper: Massively Parallel Multiple Sequence Alignment on the Supercomputer JUQUEEN

 

Authors: Plamenka Borovska, Veska Gancheva

Pages: 1-8

Abstract: In silico biological sequence processing is a key task in molecular biology. This scientific area requires powerful computing resources for exploring large sets of biological data. Parallel in silico simulations based on methods and algorithms for the analysis of biological data using high-performance distributed computing are essential for accelerating research and reducing investment. Multiple sequence alignment is a widely used method for biological sequence processing. The paper focuses on the performance investigation and improvement of the multiple biological sequence alignment software MSA_BG on the BlueGene/Q supercomputer JUQUEEN. Experimental simulations based on the parallel implementation of the MSA_BG algorithm for multiple sequence alignment have been carried out for a case study of influenza virus variability. The objectives of the investigation are code optimization, porting, scaling, profiling, and performance evaluation of the MSA_BG software. A hybrid MPI/OpenMP parallelization has been developed, and its advantages are shown through the results of benchmark tests performed on JUQUEEN. The experimental results show that the hybrid parallel implementation provides considerably better performance than the MPI-only implementation.
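
The following is a minimal Python sketch of the hybrid decomposition pattern described in the abstract, with mpi4py ranks standing in for MPI processes across nodes and a thread pool standing in for OpenMP threads within a node; the per-sequence work function and the toy sequences are placeholders, not MSA_BG itself.

# Sketch of the hybrid parallel pattern: sequences are distributed across MPI
# ranks (inter-node), and each rank processes its share with a thread pool
# (intra-node). Run with e.g.: mpiexec -n 4 python hybrid_msa_sketch.py
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

def align_one(seq):
    """Placeholder for the per-sequence alignment work."""
    return len(seq)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

sequences = ["ACGTAGC", "ACGTTGC", "ACGAAGC", "TCGTAGC",
             "ACGTAGG", "ACCTAGC", "ACGTCGC", "AAGTAGC"]   # toy input

local = sequences[rank::size]                 # cyclic split across MPI ranks
with ThreadPoolExecutor(max_workers=4) as pool:
    local_results = list(pool.map(align_one, local))

all_results = comm.gather(local_results, root=0)
if rank == 0:
    print(all_results)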