TY - GEN
T1 - Lex Rosetta: Transfer of Predictive Models Across Languages, Jurisdictions, and Legal Communities
T2 - 18th International Conference on Artificial Intelligence and Law, ICAIL 2021
AU - Savelka, Jaromir
AU - Westermann, Hannes
AU - Benyekhlef, Karim
AU - Alexander, Charlotte S.
AU - Grant, Jayla C.
AU - Amariles, David Restrepo
AU - Hamdani, Rajaa El
AU - Meeùs, Sébastien
AU - Troussel, Aurore
AU - Araszkiewicz, Michał
AU - Ashley, Kevin D.
AU - Ashley, Alexandra
AU - Branting, Karl
AU - Falduti, Mattia
AU - Grabmair, Matthias
AU - Harašta, Jakub
AU - Novotná, Tereza
AU - Tippett, Elizabeth
AU - Johnson, Shiwanni
N1 - Publisher Copyright:
© 2021 Owner/Author.
PY - 2021/6/21
Y1 - 2021/6/21
N2 - In this paper, we examine the use of multi-lingual sentence embeddings to transfer predictive models for functional segmentation of adjudicatory decisions across jurisdictions, legal systems (common and civil law), languages, and domains (i.e., contexts). Mechanisms for utilizing linguistic resources outside of their original context have significant potential benefits in AI & Law because differences between legal systems, languages, or traditions often block wider adoption of research outcomes. We analyze the use of Language-Agnostic Sentence Representations in sequence labeling models using Gated Recurrent Units (GRUs) that are transferable across languages. To investigate transfer between different contexts, we developed an annotation scheme for functional segmentation of adjudicatory decisions. We found that models generalize beyond the contexts on which they were trained (e.g., a model trained on administrative decisions from the US can be applied to criminal law decisions from Italy). Further, we found that training the models on multiple contexts increases robustness and improves overall performance when evaluating on previously unseen contexts. Finally, we found that pooling the training data from all the contexts enhances the models' in-context performance.
KW - adjudicatory decisions
KW - annotation
KW - document segmentation
KW - domain adaptation
KW - multi-lingual sentence embeddings
KW - transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85110011554&partnerID=8YFLogxK
U2 - 10.1145/3462757.3466149
DO - 10.1145/3462757.3466149
M3 - Conference contribution
AN - SCOPUS:85110011554
T3 - Proceedings of the 18th International Conference on Artificial Intelligence and Law, ICAIL 2021
SP - 129
EP - 138
BT - Proceedings of the 18th International Conference on Artificial Intelligence and Law, ICAIL 2021
PB - Association for Computing Machinery, Inc
Y2 - 21 June 2021 through 25 June 2021
ER -