Sparse Multiple Instance Learning – Recent advances in sparse multi-instance learning have produced many new methods for analyzing high-dimensional data, with applications such as image segmentation and sparse coding. However, multi-instance data has received relatively little research attention and therefore remains a challenge for data analysis. A recent paper by Natarajan et al. presents a framework for analyzing multi-instance data with deep learning in such a way that the data can be partitioned into clusters. It is presented as a nonparametric approach for learning a linear classifier on a data manifold rather than in a linear latent space, which we argue can provide insight into how deep learning can be used for this purpose.
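The abstract does not spell out the multi-instance setup, so here is a minimal sketch of the standard MIL assumption it builds on: data come as labeled *bags* of instances, and a bag is positive if at least one instance is. All names below (`bag_score`, `predict_bags`, the toy weights) are hypothetical illustrations, not the paper's method:

```python
import numpy as np

def bag_score(bag, w, b):
    """Score a bag (array of instance feature vectors) with a linear
    classifier. Under the standard MIL assumption a bag is positive
    if at least one instance is, so we max-pool over instance scores."""
    scores = bag @ w + b      # one linear score per instance
    return np.max(scores)     # max over the bag

def predict_bags(bags, w, b):
    """Label each bag positive (1) if its max instance score exceeds 0."""
    return [int(bag_score(bag, w, b) > 0) for bag in bags]

# Toy example: 2-D instances; the classifier fires on a large first coordinate.
w = np.array([1.0, 0.0])
b = -0.5
positive_bag = np.array([[0.1, 0.2], [0.9, 0.1]])  # one positive instance
negative_bag = np.array([[0.1, 0.3], [0.2, 0.1]])  # all instances negative

print(predict_bags([positive_bag, negative_bag], w, b))  # → [1, 0]
```

The manifold-based approach the abstract describes would replace this flat linear score with one learned on the data manifold, but the bag-level pooling logic is the same.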

Propositional formula matching (PFFM) aims to extract a specific formula from the input data. For this purpose, we exploit a one-to-one correspondence between a formula and the input set to learn the relationship between formulas and the values of a metric function in the matrix space. In particular, we propose a method that learns the relationship between a formula and every value of a metric function across different matrices. We define a matrix factorization-based model that learns a matrix metric function for each set of formulas, providing a measure of similarity between formulas and the values of the metric functions. We also propose a novel feature selection method for PFFM, which we call Recurrent Matrix Factorization (RMF) feature selection. Empirical results demonstrate that our approach significantly outperforms existing feature selection methods on PFFM benchmarks and other well-known datasets, including the FITC database (1,2,3).
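The abstract leaves the factorized metric unspecified; one plausible reading is a bilinear similarity s(x, y) = xᵀMy with a low-rank, positive semidefinite M = LᵀL, where the factor L can also be used to rank features. The sketch below illustrates that reading only; the factor `L`, the similarity function, and the norm-based feature ranking are all assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each formula is a d-dimensional feature vector,
# and similarity is a learned bilinear form s(x, y) = x^T M y with
# M = L^T L, a low-rank (hence positive semidefinite) factorization.
d, rank = 5, 2
L = rng.normal(size=(rank, d))   # learnable low-rank factor
M = L.T @ L                      # factorized metric matrix

def similarity(x, y, M):
    """Bilinear similarity between two formula feature vectors."""
    return float(x @ M @ y)

x = rng.normal(size=d)
# Self-similarity is ||L x||^2, so it is always non-negative.
print(similarity(x, x, M) >= 0)  # → True

# A simple factorization-based feature ranking: features whose
# columns of L have large norm contribute most to the metric.
feature_scores = np.linalg.norm(L, axis=0)
selected = np.argsort(feature_scores)[::-1][:3]  # indices of top-3 features
```

In a full method, L would be fit (e.g. by gradient descent) so that the bilinear similarities match supervised formula-to-value correspondences; here it is random purely to show the shapes and the PSD property.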

Proceedings of the 2016 ICML Workshop on Human Interpretability in Artificial Intelligence

# Sparse Multiple Instance Learning

Learning and reasoning about spatiotemporal relations and hyperspectral data

A Novel Feature Selection Method Using Backpropagation for Propositional Formula Matching
