Pre-trained language models for code (PLMCs) have gained attention in recent
research. These models are pre-trained on large-scale datasets using
multi-modal objectives. However, fine-tuning them requires extensive
supervision and is limited by the size of the provided dataset. We aim