Abstract
Coherence is an important aspect of text quality and is crucial for ensuring readability. It is an essential desideratum for the outputs of text generation systems such as summarization, question answering, machine translation, question generation, and table-to-text generation. An automated coherence scoring model is also helpful for essay scoring or for providing writing feedback. A large body of previous work has leveraged entity-based methods, syntactic patterns, discourse relations, and, more recently, traditional deep learning architectures for text coherence assessment. We hypothesize that coherence assessment is a cognitively complex task that requires deeper models and can benefit from other related tasks. Accordingly, in this paper we propose four different Transformer-based architectures for the task: a vanilla Transformer, a hierarchical Transformer, a multi-task learning-based model, and a model with a fact-based input representation. Our experiments on popular benchmark datasets across multiple domains, covering four different coherence assessment tasks, demonstrate that our models achieve state-of-the-art results, outperforming existing models by a good margin.