TY - GEN
T1 - DP-MLM: Differentially Private Text Rewriting Using Masked Language Models
T2 - Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
AU - Meisenbacher, Stephen
AU - Chevli, Maulik
AU - Vladika, Juraj
AU - Matthes, Florian
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
AB - The task of text privatization using Differential Privacy has recently taken the form of text rewriting, in which an input text is obfuscated via the use of generative (large) language models. While these methods have shown promising results in preserving privacy, they rely on autoregressive models which lack a mechanism to contextualize the private rewriting process. In response to this, we propose DP-MLM, a new method for differentially private text rewriting based on leveraging masked language models (MLMs) to rewrite text in a semantically similar and obfuscated manner. We accomplish this with a simple contextualization technique, whereby we rewrite a text one token at a time. We find that utilizing encoder-only MLMs provides better utility preservation at lower ε levels, as compared to previous methods relying on larger models with a decoder. In addition, MLMs allow for greater customization of the rewriting mechanism, as opposed to generative approaches. We make the code for DP-MLM public and reusable, found at https://github.com/sjmeis/DPMLM.
UR - http://www.scopus.com/inward/record.url?scp=85205282045&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85205282045
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 9314
EP - 9328
BT - 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Proceedings of the Conference
A2 - Ku, Lun-Wei
A2 - Martins, Andre
A2 - Srikumar, Vivek
PB - Association for Computational Linguistics (ACL)
Y2 - 11 August 2024 through 16 August 2024
ER -