Parameter-efficient fine-tuning with layer pruning on medical sequence-to-sequence modeling
The increasing size of language models has spurred strong research interest in parameter-efficient fine-tuning (PEFT) methods such as LoRA, which freeze the main body of a pre-trained model and inject small-scale trainable parameters for multiple downstream tasks (e.g., s... ...
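The LoRA idea described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the frozen pre-trained weight `W` stays fixed, while two small low-rank factors `A` and `B` (rank `r` much smaller than the layer width) are the only trainable parameters. All dimensions and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2                    # toy sizes; rank r << d_in, d_out
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight (not updated)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized
alpha = 4.0                                 # LoRA scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B A x : frozen base path plus low-rank update
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapter is initially an exact no-op,
# and only r * (d_in + d_out) parameters are ever trained.
assert np.allclose(lora_forward(x), W @ x)
```

Zero-initializing `B` is the standard LoRA choice: fine-tuning starts from exactly the pre-trained behavior, and per-task adapters can be swapped while the frozen `W` is shared across all downstream tasks.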