Apply low-rank adaptation (LoRA) when a general model genuinely needs task specialization: it freezes the pretrained weights and trains only small low-rank update matrices, so narrow-task fine-tuning is cheap in both compute and memory compared to full fine-tuning (see the sketch below).
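A minimal sketch of the idea in plain PyTorch; the `LoRALinear` wrapper and the choices `r=8`, `alpha=16` are illustrative assumptions, not taken from the original:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = base(x) + (alpha / r) * x @ A^T @ B^T
    Only the rank-r matrices A and B receive gradients."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A: down-projection (r x in), B: up-projection (out x r).
        # B starts at zero so the wrapped layer initially matches the base model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage: swap a pretrained projection for its LoRA-wrapped version, then
# train as usual -- only the adapter parameters update.
layer = nn.Linear(768, 768)          # stands in for a pretrained projection
adapted = LoRALinear(layer, r=8)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12,288 vs 590,592 in the base layer
```

In practice a library such as Hugging Face's `peft` handles this wrapping across all target projections at once (`LoraConfig` plus `get_peft_model`), and the trained adapters can be merged back into the base weights for inference at no extra latency.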