End-to-end pseudonymization of fine-tuned clinical BERT models: Privacy preservation with maintained data utility
Many state-of-the-art results in natural language processing (NLP) rely on large pre-trained language models (PLMs). These models contain very large numbers of parameters that are tuned using vast amounts of training data. These factors cause the models to memorize ...