• List of Articles


      • Open Access Article

        2 - A Near Real-Time Data Warehouse Architecture Based on Ontology
        S. M. Shafaei
        A data warehouse does not provide access to external data that must be incorporated dynamically after the warehouse has been designed and built. As a result, analysts cannot easily correlate external data with warehouse data, or compare the two side by side, without repeating work done in the past, such as defining terminology, measures, and comparisons. To overcome this problem, this paper proposes a near real-time data warehouse architecture based on ontology. Furthermore, an algorithm is proposed that reduces the response time to users' queries using materialized views and parallel processing. A case study demonstrates how correlations between external data and warehouse data are created, and its results show that such correlations are indeed discovered. Experiments show that using both the direct and parent materialized-view approaches in the existing data warehouse architecture reduces the response time of users' sequential, comparative, and combined queries.
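        The materialized-view idea behind the response-time gains can be illustrated with a minimal sketch (the names `sales`, `by_region`, and `total_for` are illustrative, not from the paper): an aggregate is precomputed once, so repeated queries read the view instead of rescanning the fact data.

```python
from collections import defaultdict

# Toy fact table: (region, amount) rows standing in for warehouse data.
sales = [("north", 10), ("south", 5), ("north", 7), ("east", 3)]

def build_materialized_view(rows):
    """Precompute per-region totals once (the 'materialized view')."""
    view = defaultdict(int)
    for region, amount in rows:
        view[region] += amount
    return dict(view)

by_region = build_materialized_view(sales)

def total_for(region):
    """Answer a query from the view instead of rescanning all rows."""
    return by_region.get(region, 0)
```

        Each subsequent query is then a dictionary lookup rather than a scan, which is the same trade (storage and maintenance cost for query speed) that database materialized views make.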
      • Open Access Article

        3 - Presenting Technique for the Quantitative Evaluation of Image Color Reduction Algorithms by Explaining a Practical Sample
        M. Fateh E. Kabir
        Color reduction algorithms are usually evaluated against visual or qualitative criteria. An evaluation without a quantitative criterion is neither complete nor accurate, because the viewer's preferences strongly affect the result. Some articles evaluate with MSE, where any difference between a pixel's color in the final image and in the original image counts as an error; this is not a suitable technique for evaluating color reduction methods. In color reduction, completely replacing a color with one close to the original is not an error; an error occurs only when such a replacement is not applied to all pixels of that color. The evaluation criterion should therefore account for the mismatch between the colors produced by the algorithm and the desired colors, which MSE ignores. In some applications of color reduction, such as color reduction in carpet cartoons, the desired final color of each pixel is specified, and producing any other color is an error, so a quantitative evaluation based on the final color of each pixel is possible. With a quantitative criterion, the viewer's preferences no longer affect the evaluation, and color reduction algorithms can be compared accurately. In this article, we present a quantitative evaluation technique for color reduction algorithms, applicable whenever the desired final colors of the pixels are specified. To demonstrate the technique, color reduction in carpet cartoons is discussed as a practical application; several color reduction methods are evaluated with the proposed criterion, and the method of reference [42] yields the lowest error.
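        The paper's exact criterion is not reproduced in this abstract, but a per-pixel mismatch rate is one plausible minimal form of a quantitative evaluation when the desired final color of every pixel is known:

```python
def color_reduction_error(result, desired):
    """Fraction of pixels whose final color differs from the desired color.

    `result` and `desired` are same-length, pixel-aligned lists of palette
    indices; a pixel counts as an error only if it does not receive exactly
    the color specified for it (unlike MSE, which penalizes any difference).
    """
    assert len(result) == len(desired)
    wrong = sum(1 for r, d in zip(result, desired) if r != d)
    return wrong / len(desired)

# A 6-pixel toy cartoon: one mislabeled pixel gives an error rate of 1/6.
desired = [0, 0, 1, 1, 2, 2]
result  = [0, 0, 1, 2, 2, 2]
rate = color_reduction_error(result, desired)
```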
      • Open Access Article

        4 - A Parallel Bacterial Foraging Optimization Algorithm implementation on GPU
        A. Rafiee S. M. Mosavi
        The bacterial foraging algorithm is a population-based optimization algorithm used to solve search problems in various branches of science. The parallel implementation of population-based optimization algorithms on graphics processing units (GPUs) is an actively discussed topic. Because the bacterial foraging algorithm is slow on complex problems and cannot solve large-scale problems, implementation on a GPU is a suitable way to cover these weaknesses. In this paper, we propose a parallel version of the bacterial foraging algorithm, designed with CUDA, that runs on GPUs. Its performance is evaluated on a number of well-known optimization problems against the standard bacterial foraging optimization algorithm. The results show that the parallel algorithm is faster and more efficient than the standard bacterial foraging optimization algorithm.
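        The chemotaxis (tumble-and-swim) step that a GPU version parallelizes across bacteria can be sketched as follows. This is a plain sequential Python sketch of the standard step, not the authors' CUDA code; `sphere` is an assumed test fitness, and the per-bacterium loop body is exactly what each GPU thread would run independently.

```python
import random

def sphere(x):
    """Assumed test fitness (to minimize): the sphere function."""
    return sum(v * v for v in x)

def chemotaxis_step(population, step_size=0.1, fitness=sphere):
    """One tumble-and-swim step of bacterial foraging, applied to each
    bacterium independently -- the loop a GPU implementation parallelizes."""
    new_pop = []
    for x in population:
        # Tumble: pick a random unit direction.
        direction = [random.uniform(-1, 1) for _ in x]
        norm = sum(d * d for d in direction) ** 0.5
        candidate = [xi + step_size * di / norm
                     for xi, di in zip(x, direction)]
        # Swim (keep the move) only if the fitness improved.
        new_pop.append(candidate if fitness(candidate) < fitness(x) else x)
    return new_pop

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(8)]
init_best = min(sphere(x) for x in pop)
for _ in range(200):
    pop = chemotaxis_step(pop)
best = min(sphere(x) for x in pop)
```

        Because a move is kept only when it improves fitness, each bacterium's fitness is non-increasing, so the population's best value can only improve over the iterations.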
      • Open Access Article

        5 - Precise Tracking of Moving Objects Using KLT, SIFT and DBSCAN Algorithms
        A. Karamiani
        Detecting and tracking moving objects is an important task in video analysis. In this paper, we propose a new method for tracking several concurrent moving objects seen by a fixed camera. In the proposed method, at each stage, the locations of the moving objects in the camera's view are obtained from the information in the current and previous frames. SIFT feature points are extracted from the previous frame, and their correspondences in the current frame are found with the KLT feature-point correspondence algorithm. From the corresponding feature points of the two consecutive frames, displacements are estimated, and stationary or barely moving feature points are eliminated so that only points belonging to moving objects remain. The labeled feature points are then grouped into separate clusters, one per moving object, using the DBSCAN clustering algorithm. In this way, the position of every moving object in the camera's view is determined at each moment through a one-to-one correspondence between the objects. The results show that the proposed method tracks moving objects with high accuracy and acceptable running time.
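        The final grouping stage can be sketched with a minimal, generic DBSCAN over 2-D feature points. This is not the authors' implementation; `eps` and `min_pts` are illustrative values, and the points stand in for the moving feature points that survive the filtering step.

```python
def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps * eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # noise, e.g. a stray feature point
            continue
        cluster += 1                # start a new cluster (a moving object)
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:     # former noise becomes a border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:
                queue.extend(j_seeds)   # expand only from core points
    return labels

# Feature points of two moving objects plus one stray (noise) point.
points = [(0, 0), (0.5, 0), (0, 0.5), (0.4, 0.4),
          (10, 10), (10.5, 10), (10, 10.5), (10.4, 10.4),
          (50, 50)]
labels = dbscan(points, eps=1.0, min_pts=3)
```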
      • Open Access Article

        6 - Green Routing Protocol Based on Sleep Scheduling in Mobile Ad-Hoc Network
        Z. Movahedi A. Karimi
        In recent years, green communication has emerged as an important concern for the communication research and industrial communities, because of its effect on reducing environmental pollution. According to recent research, a significant share of this pollution is produced by local area computer networks. A mobile ad-hoc network (MANET) is one of the widely used local area networks. Energy efficiency matters in MANETs not only from the green communication point of view, but also because of the network's limited battery lifetime. However, MANET characteristics such as their distributed nature and lack of administration, node mobility, frequent topology changes, and scarce resources make greening a challenging task in this context. In this paper, we propose and implement a green routing protocol for MANETs that reduces idle energy consumption by keeping only the necessary nodes awake and switching off the other, unutilized nodes. Simulation results show that the protocol saves about 20 percent of energy on average while remaining aware of quality of service.
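        The sleep-scheduling idea, switching off nodes whose absence keeps the remaining network connected, can be sketched on a static toy topology. The paper's protocol is distributed and QoS-aware; this centralized greedy connectivity check is only an illustration of the underlying criterion.

```python
def connected(adj, active):
    """BFS: are all nodes in `active` mutually reachable using only
    active nodes?"""
    active = set(active)
    if not active:
        return True
    start = next(iter(active))
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v in active and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == active

def sleep_schedule(adj, must_stay):
    """Greedily put a node to sleep if the remaining active nodes stay
    connected and every must-stay node (a traffic endpoint) is kept awake."""
    active = set(adj)
    for node in sorted(adj):
        if node in must_stay:
            continue
        trial = active - {node}
        if connected(adj, trial):
            active = trial
    return active

# Toy topology: a triangle 0-1-2 plus a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
active = sleep_schedule(adj, must_stay={0, 3})
```

        Here node 1 can sleep because 0 and 3 still reach each other through 2, but node 2 must stay awake as the only relay toward node 3.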
      • Open Access Article

        7 - Model-Based Classification of Emotional Speech Using Non-Linear Dynamics Features
        A. Harimi A. Ahmadyfard A. Shahzadi K. Yaghmaie
        Recent developments in interactive and robotic systems have motivated researchers to recognize human emotion from speech. The present study classifies emotional speech signals using a two-stage classifier based on the arousal-valence emotion model. In this method, samples are first classified by arousal level using conventional prosodic and spectral features. Valence-related emotions are then classified using the proposed non-linear dynamics features (NLDs), which are extracted from the geometrical properties of the reconstructed phase space of the speech signal. For this purpose, four descriptor contours are employed to represent the geometrical properties of the reconstructed phase space, and the discrete cosine transform (DCT) is used to compress the information of these contours into a set of low-order coefficients; the significant DCT coefficients of the descriptor contours form the proposed NLDs. The classification accuracy of the proposed system was evaluated with 10-fold cross-validation on the Berlin database. Average recognition rates of 96.35% and 87.18% were achieved for female and male speakers, respectively. Over the total number of male and female samples, the proposed speech emotion recognition system attains an overall recognition rate of 92.34%.
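        The DCT compression of a descriptor contour can be sketched as follows. The contour here is a synthetic band-limited curve, not one extracted from a reconstructed phase space; the point is only that a smooth contour is captured by a few low-order DCT coefficients.

```python
import math

def dct(signal):
    """Discrete cosine transform (DCT-II, unnormalized) of a 1-D contour."""
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n)]

def idct_truncated(coeffs, n, keep):
    """Reconstruct the contour from only the first `keep` coefficients."""
    out = []
    for i in range(n):
        s = coeffs[0] / 2
        for k in range(1, keep):
            s += coeffs[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
        out.append(2 * s / n)
    return out

# A smooth, band-limited toy contour built from two low-frequency terms.
n = 64
contour = [1.0
           + 0.50 * math.cos(math.pi * 1 * (2 * i + 1) / (2 * n))
           + 0.25 * math.cos(math.pi * 3 * (2 * i + 1) / (2 * n))
           for i in range(n)]
coeffs = dct(contour)
approx = idct_truncated(coeffs, n, keep=8)
err = max(abs(a - b) for a, b in zip(contour, approx))
```

        Keeping 8 of the 64 coefficients reconstructs this contour almost exactly, which is the compression property the NLDs rely on.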
      • Open Access Article

        8 - Speed up the Search for Proximity-Based Models
        J. Paksima A. Zareh V. Derhami
        One of the main challenges in proximity models is the speed of retrieval. These models define a distance that is calculated from the positions of the query terms in the documents; finding the positions and calculating the distance is a time-consuming process, and because it is usually executed at search time, it matters especially to users. If the number of documents to be scored can be reduced, retrieval becomes faster. In this paper, the SNTK3 algorithm is proposed to prune documents dynamically. To avoid allocating too much memory and to reduce the risk of errors during retrieval, the scores of some documents are calculated without any pruning (Skip-N). The SNTK3 algorithm uses three pyramids to extract the documents with the highest scores. Experiments show that the proposed algorithm improves retrieval speed.
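        SNTK3's three-pyramid structure is not detailed in this abstract, but the underlying top-k pruning idea it builds on, skipping any document whose score cannot beat the current k-th best, can be sketched with a single min-heap:

```python
import heapq

def top_k(doc_scores, k):
    """Keep a size-k min-heap of the best scores seen so far; a document
    whose score cannot beat the heap's minimum is pruned immediately."""
    heap = []  # (score, doc_id) pairs, smallest score on top
    for doc_id, score in doc_scores:
        if len(heap) < k:
            heapq.heappush(heap, (score, doc_id))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, doc_id))
        # else: pruned -- no need to finish ranking this document
    return sorted(heap, reverse=True)

scores = [("d1", 0.2), ("d2", 0.9), ("d3", 0.5), ("d4", 0.7), ("d5", 0.1)]
best = top_k(scores, k=3)
```

        In a real proximity model the pruning pays off before the score is fully computed: a cheap upper bound is compared against the heap minimum, and the expensive position-based distance is only evaluated for documents that might enter the top k.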
      • Open Access Article

        9 - Web Robot Detection Using Fuzzy Rough Set Theory
        S. Rahimi J. Hamidzadeh
        Web robots are software programs that traverse the internet autonomously; their most important task is to fetch information and send it back to their origin server. Their high consumption of network bandwidth and the resulting reduction in server performance have given rise to the web robot detection problem. In this paper, fuzzy rough set theory is used for web robot detection. The proposed method has four phases. In the first phase, user sessions are identified using fuzzy rough set clustering. In the second phase, a vector of 10 features is extracted for each session. In the third phase, the identified sessions are labeled using a heuristic method. In the fourth phase, these labels are improved using fuzzy rough set classification. The method's performance has been evaluated on a real-world dataset; the experimental results, compared with state-of-the-art methods, show the superiority of the proposed method in terms of F-measure.
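        The paper's 10 session features are not listed in this abstract. As an assumed illustration of the feature-extraction phase, the sketch below computes three features commonly used in robot detection work: robots.txt access, HEAD-request ratio, and image-request ratio.

```python
def session_features(requests):
    """Extract simple per-session features of the kind used for web robot
    detection. `requests` is a list of (method, path) pairs for one session;
    the three features here are common examples, not the paper's actual set."""
    n = len(requests)
    robots_txt = any(path == "/robots.txt" for _, path in requests)
    head_ratio = sum(1 for method, _ in requests if method == "HEAD") / n
    image_ratio = sum(1 for _, path in requests
                      if path.endswith((".png", ".jpg", ".gif"))) / n
    return {"robots_txt": robots_txt,
            "head_ratio": head_ratio,
            "image_ratio": image_ratio}

# A crawler-like session (fetches robots.txt, uses HEAD, skips images)
# versus a browser-like session (loads the page's embedded images).
crawler = [("GET", "/robots.txt"), ("HEAD", "/a.html"), ("GET", "/b.html")]
browser = [("GET", "/a.html"), ("GET", "/logo.png"), ("GET", "/photo.jpg")]
f_crawler = session_features(crawler)
f_browser = session_features(browser)
```

        A classifier (here, the fuzzy rough set classifier of phase four) then separates sessions in this feature space, where robot and human traffic tend to occupy different regions.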