Journal Article
Leveraging Implicit Relative Labeling-Importance Information for Effective Multi-Label Learning
IEEE Transactions on Knowledge and Data Engineering (Early Access), 2019, pp. 1-1 | November 6, 2019 | DOI: 10.1109/TKDE.2019.2951561
Multi-label learning deals with training examples each represented by a single instance while associated with multiple class labels, and the task is to train a predictive model which can assign a set of proper labels to an unseen instance. Existing approaches employ the common assumption of equal labeling-importance, i.e., all associated labels are regarded as relevant to the training instance while their relative importance in characterizing its semantics is not differentiated. Nonetheless, this common assumption does not reflect the fact that the importance degree of each relevant label is generally different, though the importance information is not directly accessible from the training examples. In this paper, we show that it is beneficial to leverage the implicit relative labeling-importance (RLI) information to help induce a multi-label predictive model with strong generalization performance. Specifically, RLI degrees are formalized as a multinomial distribution over the label space, which can be estimated by either a global label propagation procedure or local k-nearest neighbor reconstruction. Correspondingly, the multi-label predictive model is induced by fitting the modeling outputs to the estimated RLI degrees along with multi-label empirical loss regularization. Extensive experiments clearly validate that leveraging implicit RLI information serves as a favorable strategy for effective multi-label learning.
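The global label propagation estimate mentioned in the abstract can be illustrated with a minimal sketch: build an instance similarity graph, propagate the binary label matrix over it, then restrict the propagated mass to each instance's relevant labels and normalize rows into a multinomial distribution. All function names, parameters, and defaults below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def estimate_rli_label_propagation(X, Y, alpha=0.5, n_iter=50, sigma=1.0):
    """Sketch of RLI-degree estimation via global label propagation.

    X: (n, d) feature matrix; Y: (n, q) binary label matrix.
    Returns U: (n, q) RLI degrees, each row a multinomial over the
    instance's relevant labels. alpha/n_iter/sigma are assumed defaults.
    """
    # Pairwise Gaussian similarities with zero self-similarity.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Iterative propagation: F <- alpha * S F + (1 - alpha) * Y.
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y

    # Keep mass only on relevant labels, then row-normalize so each
    # instance's RLI degrees form a multinomial distribution.
    F = F * Y
    return F / (F.sum(axis=1, keepdims=True) + 1e-12)
```

The predictive model would then be fit to these soft targets (rather than the original 0/1 labels), with a multi-label empirical loss as the regularization term described in the abstract.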
Uploaded by 张敏灵 on November 30, 2020; copyright belongs to the original author.