A BERT-based Multi-task Learning Model for Judgment Prediction
First published: 2019-04-01
Abstract: Judgment prediction is a difficult problem in the judicial field. Given a fact description, a judge needs to consult several documents to decide which law articles apply. This task is complex and labor-intensive. Previous works typically treat judgment prediction as a multi-label learning problem. These works usually neglect external knowledge, which limits their performance. In this paper, we address this issue with a multi-task learning framework that incorporates pretrained external knowledge, and we propose a BERT-based multi-task learning model (BMM for short). Specifically, BMM uses a pretrained BERT model to obtain external knowledge, and a multi-task learning framework is then incorporated to learn multi-label classification and a language model jointly. Experimental results on three real-world datasets demonstrate that the proposed model achieves significant improvements over state-of-the-art methods.
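The abstract states that BMM learns multi-label classification and a language model jointly, but gives no concrete objective. Below is a minimal NumPy sketch of one plausible joint loss, assuming (as is common in multi-task setups, not stated in the paper) a per-label sigmoid with binary cross-entropy for article prediction, a softmax cross-entropy for the language-modeling head, and a hypothetical weighting coefficient `alpha`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_label_loss(logits, targets):
    # One independent sigmoid per law article; binary cross-entropy
    # averaged over all labels (a common multi-label objective).
    p = sigmoid(logits)
    return -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))

def lm_loss(logits, token_ids):
    # Cross-entropy of the language-model head: negative log-probability
    # of the observed token at each position.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(token_ids)), token_ids]))

def joint_loss(cls_logits, labels, lm_logits, tokens, alpha=0.5):
    # Hypothetical multi-task objective: weighted sum of the two task
    # losses, sharing the same underlying (BERT-like) encoder upstream.
    return alpha * multi_label_loss(cls_logits, labels) \
        + (1 - alpha) * lm_loss(lm_logits, tokens)
```

In a full model both logit tensors would come from task-specific heads on top of a shared pretrained BERT encoder; the sketch only shows how the two objectives combine into one training signal.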
Keywords: Judgment Prediction; Multi-task Learning; Multi-label Classification