Probabilistic Modeling of Time-Series for Spatio-Temporal Data with a Bayesian Network Adversary
The development and growth of deep reinforcement learning (DRL) have been fueled by the large volumes of data generated by a wide variety of real-world problems. As a particular instance of this phenomenon, reinforcement learning (RL) has been proposed […]
-
Learning complex games from human faces
In this paper, we present a simple model for representing semantic images that is robust to both human pose variations and pose orientations. The proposed model is evaluated on a real-world mobile robot, the RoboBike. The RoboBike is a highly capable, active robot, and its camera […]
-
Learning from the Fallen: Deep Cross Domain Embedding
This paper presents a novel and efficient method for learning probabilistic logic for deep neural networks (DNNs), trained in a semi-supervised setting. The method is based on the theory of conditional independence. As a consequence, the network learns to choose its parameters in a […]
-
Boosting and Deblurring with a Convolutional Neural Network
Feature extraction and classification are two important applications of machine learning in computer vision. In this work, we propose a novel deep convolutional neural network architecture, RNN-CNet, for automatically training image classifiers. RNN-CNet builds on a CNN architecture and is capable of […]
-
A Computational Study of Bid-Independent Randomized Discrete-Space Models for Causal Inference
We propose an efficient algorithm for classification and regression under uncertainty in the causal information. The method uses sample distributions of random variables, which is convenient when only small samples are available. Each random variable is drawn at random from the […]
-
Show full PR text via iterative learning
We present a new approach to training human-robot dialogue systems using Recurrent Neural Networks (RNNs). We propose to train one recurrent network for dialog parsing and a second to model dialog sequences. These recurrent networks are then used to represent dialog sequences. The proposed […]
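The excerpt gives no architectural detail, but the core idea of a recurrent network producing per-token representations of a dialog can be sketched minimally. The Elman-style cell and all names below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def rnn_encode(seq, Wx, Wh, b):
    # Elman-style recurrent step: h_t = tanh(Wx x_t + Wh h_{t-1} + b).
    h = np.zeros(Wh.shape[0])
    states = []
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return np.stack(states)  # one hidden state per dialog token

rng = np.random.default_rng(0)
emb, hidden = 8, 12
Wx = rng.normal(0, 0.1, (hidden, emb))    # input-to-hidden weights
Wh = rng.normal(0, 0.1, (hidden, hidden)) # hidden-to-hidden weights
b = np.zeros(hidden)
dialog = rng.normal(size=(6, emb))        # six embedded dialog tokens
states = rnn_encode(dialog, Wx, Wh, b)    # (6, 12) sequence representation
```

The stacked hidden states would then feed whatever downstream parser or sequence model the paper has in mind.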
-
Fast Convolutional Neural Networks via Nonconvex Kernel Normalization
In this work, we propose a new framework for learning deep CNNs from raw image patches. As a case study, we develop a novel and scalable method for learning deep CNNs using compressed convolutional neural networks (convNNs). We first show that constrained CNNs achieve state-of-the-art performance […]
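The excerpt does not say what "kernel normalization" means concretely; one plausible reading is rescaling each convolution filter to unit L2 norm before applying it. The sketch below is a hypothetical illustration of that reading (function names and shapes are assumptions):

```python
import numpy as np

def normalize_kernels(kernels, eps=1e-8):
    # Rescale each filter to unit L2 norm (one simple reading of
    # "kernel normalization"; not necessarily the paper's constraint).
    flat = kernels.reshape(kernels.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    return (flat / (norms + eps)).reshape(kernels.shape)

def conv2d_valid(image, kernel):
    # Naive single-channel "valid" 2-D correlation.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
kernels = normalize_kernels(rng.normal(size=(4, 3, 3)))  # 4 filters of 3x3
image = rng.normal(size=(8, 8))                          # one raw patch
features = np.stack([conv2d_valid(image, k) for k in kernels])  # (4, 6, 6)
```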
-
A Comparative Study of Support Vector Machine Classifiers for Medical Records
In recent years, researchers have shown significant improvements in supervised learning with small amounts of training data. In this paper, we study the performance of this approach in the biomedical domain by analyzing neural networks, a class of recurrent neural […]
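The title's baseline, a support vector machine, can be sketched from scratch as a Pegasos-style stochastic subgradient descent on the hinge loss. The synthetic two-cluster data below is a stand-in, not the paper's medical records, and all names are hypothetical:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    # Pegasos-style stochastic subgradient descent on the hinge loss.
    # Labels y must be in {-1, +1}; no bias term (data assumed centered).
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # standard Pegasos step size
            if y[i] * (X[i] @ w) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Synthetic stand-in for a small labeled set: two Gaussian clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (40, 5)), rng.normal(1, 1, (40, 5))])
y = np.array([-1] * 40 + [1] * 40)
w = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)  # training accuracy of the learned separator
```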
-
Dynamic Network Models: Minimax Optimal Learning in the Presence of Multiple Generators
Recent research has shown that networks can be used to tackle a range of practical and industrial problems. The purpose of this article is to show that the network architecture of a distributed computing system is one of […]
-
Learning the Latent Representation of Words in Speech Using Stochastic Regularized LSTM
This paper presents a novel way to model the utterances of a speaker (or other non-speaker) using both the context structure and the language structure (e.g., grammatical structure). The resulting knowledge of sentence-level semantics can be used efficiently to model sentence-level […]
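A minimal sketch of the title's ingredients: an LSTM encoder over word embeddings, with "stochastic regularization" rendered here as plain dropout on the hidden state between steps. That rendering is an assumption (the excerpt does not specify the regularizer), and every name below is illustrative:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM step; gates stacked along axis 0 as [input, forget, cell, output].
    z = W @ x + U @ h + b
    H = h.size
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))   # forget gate
    g = np.tanh(z[2 * H:3 * H])             # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3 * H:]))    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def encode(seq, W, U, b, hidden, drop_p=0.2, rng=None):
    # Run the LSTM over a word sequence; dropout on h between steps stands in
    # for the paper's (unspecified) stochastic regularization.
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
        if rng is not None:
            mask = (rng.random(hidden) > drop_p) / (1 - drop_p)
            h = h * mask
    return h  # latent representation of the sequence

rng = np.random.default_rng(0)
emb, hidden = 8, 16
W = rng.normal(0, 0.1, (4 * hidden, emb))
U = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
seq = rng.normal(size=(5, emb))  # five word embeddings
rep = encode(seq, W, U, b, hidden, rng=rng)
```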