

Current Issue

Vol. 80 No. 18

Last Published Articles

With the advent and development of IoT applications in recent years, the number of smart devices, and consequently the volume of data they collect, is rapidly increasing. At the same time, most IoT applications require real-time data analysis and low-latency service delivery. Under these circumstances, sending the huge volume of heterogeneous data to cloud data centers for processing and analysis is impractical, and the fog computing paradigm is a better choice. Because fog nodes have limited computational resources, using them efficiently is of great importance. In this paper, we consider the scheduling of IoT application tasks in the fog computing paradigm. The main goal of this study is to reduce the latency of service delivery, which we pursue with a deep reinforcement learning approach. The proposed method combines the Q-learning algorithm with deep learning, experience replay, and target network techniques. According to the experimental results, the DQLTS algorithm improves the ASD metric by 76% compared to QLTS and by 6.5% compared to the RS algorithm. Moreover, it converges faster than QLTS.
Pegah Gazori - Dadmehr Rahbari - Mohsen Nickray
Keywords : Internet of Things, Fog computing, Task Scheduling, Deep reinforcement learning
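The ingredients named in the abstract — Q-learning, a learned approximator, experience replay, and a target network — can be illustrated with a minimal sketch. The toy environment, the linear Q-function, and all hyperparameters below are hypothetical choices for illustration, not the paper's DQLTS setup.

```python
import random
from collections import deque

import numpy as np

STATE_DIM, N_ACTIONS = 4, 3          # e.g., task features -> candidate fog nodes (hypothetical)
GAMMA, LR = 0.9, 0.01                # discount factor and learning rate
BATCH, SYNC_EVERY = 32, 50           # replay batch size, target-sync period

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))   # online Q weights
W_target = W.copy()                                      # frozen target-network weights
replay = deque(maxlen=1000)                              # experience replay buffer

def q_values(weights, state):
    """Linear approximation: Q(s, a) = w_a . s for each action a."""
    return weights @ state

def train_step(step):
    global W_target
    if len(replay) < BATCH:
        return
    for s, a, r, s_next, done in random.sample(replay, BATCH):
        # Bootstrapped target uses the frozen target network, not the online one.
        target = r if done else r + GAMMA * np.max(q_values(W_target, s_next))
        td_error = target - q_values(W, s)[a]
        W[a] += LR * td_error * s                        # SGD step on the squared TD error
    if step % SYNC_EVERY == 0:
        W_target = W.copy()                              # periodic hard synchronization

# Fill the buffer with random toy transitions and run a few updates.
for step in range(200):
    s = rng.normal(size=STATE_DIM)
    a = int(rng.integers(N_ACTIONS))
    s_next = rng.normal(size=STATE_DIM)
    replay.append((s, a, float(rng.normal()), s_next, False))
    train_step(step)
```

Sampling minibatches from the replay buffer breaks the temporal correlation of consecutive transitions, and the periodically synced target network keeps the bootstrap target stable between updates.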
Text generation is one of the important problems in the field of Natural Language Processing. Earlier methods for text generation, based on language modeling with the teacher forcing approach, suffer from a discrepancy between the training and test phases and from an objective (maximum likelihood estimation) that is not well suited to generation. In recent years, Generative Adversarial Networks (GANs) have become very popular due to their capabilities in image generation, and they have also attracted attention for sequence generation. However, since text sequences are discrete, GANs cannot be directly employed for text generation, and new approaches such as reinforcement learning and approximation have been utilized for this purpose. Furthermore, the instability of GAN training introduces new challenges. In this paper, a new GAN-based ensemble method is proposed for the sequence generation problem. The idea of the proposed method is based on ratio estimation, which enables the model to overcome the discreteness of the data. The proposed method is also more stable than other GAN-based methods, and the exposure bias problem of the teacher forcing approach does not arise in it. Experiments show the superiority of the proposed method over previous GAN-based methods for text generation.
Ehsan Montahaie - Mahdieh Soleymani Baghshah
Keywords : Text generation, generative model, GANs, ensemble learning
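The ratio-estimation idea underlying such methods can be sketched in a few lines: a discriminator trained to separate real from generated samples implicitly estimates the density ratio p_data(x) / p_gen(x) via D(x) / (1 - D(x)), so the generator can be guided by that ratio rather than by gradients through discrete tokens. The 1-D Gaussian "data" and logistic-regression discriminator below are hypothetical stand-ins for text distributions and a neural discriminator, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(loc=1.0, scale=1.0, size=2000)   # stands in for data samples
fake = rng.normal(loc=-1.0, scale=1.0, size=2000)  # stands in for generator samples

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression discriminator D(x) = sigmoid(w*x + b) by gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(300):
    for x, label in ((real, 1.0), (fake, 0.0)):
        pred = sigmoid(w * x + b)
        grad = pred - label                        # d(cross-entropy)/d(logit)
        w -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)

def density_ratio(x):
    """Estimated p_real(x) / p_fake(x), recovered from the discriminator output."""
    d = sigmoid(w * x + b)
    return d / (1.0 - d)
```

The ratio exceeds 1 where real data is denser and falls below 1 where generated data is denser; nothing in this estimate requires differentiating through the samples themselves, which is what makes the approach compatible with discrete data.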
This research seeks to improve one of the most widely used algorithms in machine learning, the random forest algorithm, using compression and parallelization techniques. The main challenge we address is the application of the random forest algorithm to processing and analyzing big data, where the algorithm does not deliver its usual performance because of the large number of memory accesses it requires. This research demonstrates how the desired goal can be achieved with an innovative compression method combined with parallelization techniques. Specifically, identical components of the trees in the random forest are merged and shared. In the processing phase, a vectorization-based parallelization approach is used together with a shared-memory-based parallelization method. To evaluate its performance, we run the proposed solution on Kaggle benchmarks, which are widely used in machine learning competitions. The experimental results show that the proposed compression method alone reduces the required processing time by 61%, while compression combined with the aforementioned parallelization methods yields an improvement of about 95%. Overall, this research implies that the proposed solution is an effective step toward high performance computing.
Naeimeh Mohammad Karimi - Mahdi Yazdian Dehkordi - Amin Nezarat
Keywords : Machine learning, Random forest, High Performance Computing, Compression, Parallelization, Big Data
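The idea of merging identical tree components can be sketched as a form of hash-consing: each distinct subtree is stored exactly once and shared by every tree that contains it. The tiny hand-built trees below are hypothetical examples; in the paper, the trees would come from a trained random forest.

```python
class Interner:
    """Stores each distinct leaf or (feature, threshold, left, right) node once."""
    def __init__(self):
        self.table = {}

    def leaf(self, value):
        return self.table.setdefault(("leaf", value), ("leaf", value))

    def node(self, feature, threshold, left, right):
        # Children are already interned, so their identities index shared subtrees.
        key = ("node", feature, threshold, id(left), id(right))
        return self.table.setdefault(key, ("node", feature, threshold, left, right))

interner = Interner()

# Two trees that share an identical right subtree: [x0 <= 0.5 -> "A" else "B"].
shared = interner.node(0, 0.5, interner.leaf("A"), interner.leaf("B"))
tree1 = interner.node(1, 1.0, interner.leaf("A"), shared)
tree2 = interner.node(2, 2.0, interner.leaf("B"), shared)

def predict(node, x):
    """Walk one (possibly shared) tree: go left if x[feature] <= threshold."""
    while node[0] == "node":
        _, feature, threshold, left, right = node
        node = left if x[feature] <= threshold else right
    return node[1]
```

Without sharing, the two trees would hold 2 * 5 = 10 nodes; the interner stores only the 5 distinct ones, which shrinks the memory footprint and improves cache behavior during prediction.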
Two important issues in acoustic echo cancellation (AEC) using adaptive filters are the sparseness of the acoustic path impulse responses and the strong dependency of the adaptive algorithm's convergence performance on the eigenvalue spread of the input signal correlation matrix. These issues degrade the performance of adaptive AEC systems. In this paper, to improve the performance of the LMS/Newton adaptive algorithm in AEC, the matrix inverse computation is modified. To this end, the matrix inversion lemma is employed such that the contribution of the matrix inverse to the weight update is initially high; as a result, the algorithm's dependency on the eigenvalue spread is low during the initial convergence. In addition, an improved proportionate method is applied to the step-size adjustment such that, during convergence, the contribution of the weights with higher amplitudes is gradually varied until all contributions become identical at the end of convergence. The proposed adaptive proportionate method improves both the convergence rate and the steady-state performance for the identification of sparse acoustic impulse responses. Simulation results using a colored speech-like signal show that the steady-state misalignment of the proposed algorithm is typically 6.5 dB lower than that of the LMS/Newton algorithm. Moreover, the proposed algorithm typically converges 3.6 s faster than the PNLMS algorithm to a misalignment of -17 dB. Theoretical misalignment analyses in the transient and steady states are presented and verified against the simulation results.
Mehdi Bekrani
Keywords : Acoustic echo, adaptive filter, correlation matrix, sparse impulse response
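The proportionate principle the abstract builds on — giving larger-magnitude coefficients a proportionally larger step size so sparse impulse responses are identified faster — can be illustrated with a generic PNLMS-style update on synthetic data. This is not the paper's modified LMS/Newton algorithm; the filter length, sparse echo path, and step-size constants are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 64
h_true = np.zeros(L)
h_true[[3, 10, 25]] = [1.0, -0.5, 0.3]        # sparse "echo path" to identify

w = np.zeros(L)                                # adaptive filter coefficients
mu, delta, rho = 0.5, 1e-3, 0.01               # step size, regularization, gain floor
x_buf = np.zeros(L)                            # tapped delay line of the input

for n in range(4000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.normal()                    # white far-end input sample
    d = h_true @ x_buf + 1e-3 * rng.normal()   # desired signal: echo + small noise
    e = d - w @ x_buf                          # a priori error
    # Proportionate gains: each coefficient's step size scales with its
    # magnitude, with a floor so small/zero coefficients still adapt.
    g = np.maximum(np.abs(w), rho * max(delta, float(np.max(np.abs(w)))))
    g /= np.sum(g)
    w += mu * e * (g * x_buf) / (x_buf @ (g * x_buf) + delta)

misalignment_db = 10 * np.log10(np.sum((h_true - w) ** 2) / np.sum(h_true ** 2))
```

Because almost all of the true response is zero, concentrating the adaptation effort on the few active taps speeds up the initial convergence compared with a uniform (NLMS-style) step size; the abstract's method additionally relaxes the gains toward uniformity as convergence completes.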
Human physical activity recognition using the gyroscope and accelerometer sensors of smartphones has attracted much research in recent years. In this paper, the performance of the principal component analysis feature extraction method and several classifiers, including support vector machine, logistic regression, AdaBoost, and convolutional neural network, is evaluated in order to propose an efficient system for human activity recognition. The proposed system improves the classification accuracy compared with state-of-the-art work in this field. A physical activity recognition system is expected to be robust across different smartphone platforms, yet the quality of smartphone sensors and their corresponding noise vary considerably between different smartphone models and sometimes within the same model. It is therefore beneficial to study the effect of noise on the efficiency of a human activity recognition system. In this paper, the robustness of the investigated classifiers is also studied at various levels of sensor noise to find the most robust solution for this purpose. The experimental results, obtained on a well-known human activity recognition dataset, show that the support vector machine, with an average accuracy of 96.34%, performs more robustly than the other classifiers at different levels of sensor noise.
Mahdi Yazdian Dehkordi - Zahra Abedi - Nasim Khani
Keywords : Smartphone, Gyroscope, Accelerometer, Human physical activity recognition, Sensor quality, Sensor noise
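The PCA feature-extraction step the abstract relies on can be sketched compactly: center the feature matrix and project it onto the directions of maximum variance obtained from an SVD. The synthetic two-class "sensor feature" data, dimensions, and the nearest-centroid classifier below are hypothetical stand-ins for the paper's dataset and its SVM/logistic-regression/AdaBoost/CNN classifiers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated synthetic activity classes in a 20-D feature space.
class0 = rng.normal(loc=0.0, scale=1.0, size=(100, 20))
class1 = rng.normal(loc=3.0, scale=1.0, size=(100, 20))
X = np.vstack([class0, class1])
y = np.array([0] * 100 + [1] * 100)

def pca_fit_transform(X, n_components):
    """Center the data and project onto the top principal components via SVD."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, mean, Vt[:n_components]

Z, mean, components = pca_fit_transform(X, n_components=2)

# A nearest-centroid classifier in the reduced space stands in for the SVM.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Reducing the raw sensor features to a few principal components discards low-variance directions, which is also where much of the sensor noise the abstract studies tends to concentrate.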
