Abstract
Text summarization is a well-established task
within the natural language processing (NLP)
community. However, controllable summarization tailored to user requirements has gained traction only recently. While several
efforts explore controllability in text summarization, the investigation of Multi-Attribute
Controllable Summarization (MACS) remains
limited. This work addresses this gap by examining the MACS task through the lens of large language models (LLMs) under various learning paradigms, with a particular focus on low-rank adapters.
We experiment with several popular adapter fine-tuning strategies to assess the effectiveness
of the resulting models in retaining cues and
patterns associated with multiple controllable
attributes. Additionally, we propose and evaluate a novel hierarchical adapter fusion technique to integrate the knowledge learned from two distinct controllable attributes. Finally, we present our findings, discuss the challenges encountered, and suggest potential avenues for advancing the MACS task.