The 5th New Frontiers in Summarization Workshop

EMNLP 2025

The Fifth Workshop on “New Frontiers in Summarization” aims to foster cross-fertilization of ideas in automatic summarization and related fields. It will cover novel paradigms, shared tasks, applied research, and future directions while accelerating the development of tools, datasets, and resources to meet the summarization needs of academia, industry, and government. As advances in natural language processing (e.g., pre-trained models and prompt-based learning) improve summarization performance, challenges remain in areas such as trustworthiness, interpretability, evaluation reliability, and the integration of knowledge and modalities for real-world deployment.

To tackle these challenges, we plan to expand the workshop’s scope beyond traditional summarization to include grounded text generation with retrieval, reference- and attribute-based summarization, multi-modal and long-form summarization, query-focused approaches, hallucination reduction, efficiency, and novel evaluation methods. This broader focus, particularly addressing the growing role of large language models (LLMs), is expected to attract wider engagement from the research community and push the boundaries of summarization research.

Keynote Speakers

Mohit Bansal

University of North Carolina at Chapel Hill

TBD

Arman Cohan

Yale University

TBD

Greg Durrett

The University of Texas at Austin

TBD

Alexander R. Fabbri

Salesforce

TBD

Mirella Lapata

The University of Edinburgh

TBD

Jey Han Lau

The University of Melbourne

TBD

Pengfei Liu

Shanghai Jiao Tong University

TBD

Yulia Tsvetkov

University of Washington

TBD

Call for Papers

Both long papers (up to 8 pages, with unlimited references) and short papers (up to 4 pages, with unlimited references) are welcome for submission!

Topics relevant to this workshop include (but are not limited to):

  • Abstractive, extractive, and hybrid summarization
  • Summarization with pre-trained large models
  • Zero-shot/few-shot summarization
  • Long-context summarization
  • Fairness in summarization: faithfulness, bias, toxicity, and privacy-preserving methods
  • Interpretability, controllability, and visualization of summarization systems
  • Reference- and attribute-based summarization
  • Query-focused summarization
  • Knowledge-injected summarization with retrieval
  • Multilingual summarization
  • Multimodal summarization (text, speech, image, video)
  • Multi-genre summarization (news, tweets, product reviews, conversations, medical records, etc.)
  • Semantic aspects of summarization (representation, inference, validity)
  • Cognitive and psycholinguistic aspects (readability, usability)
  • Development of new algorithms, datasets, and annotations
  • Development of new evaluation metrics
  • Hallucination reduction and trustworthiness in summarization
  • Efficiency in summarization and large model inference

Submission Instructions

You are invited to submit your papers through our START/SoftConf submission portal. All submitted papers must be anonymized for double-blind review. Long papers may not exceed 8 pages and short papers may not exceed 4 pages, strictly following the ACL style templates; the mandatory limitations section does not count toward the page limit. Supplementary materials and appendices (either as separate files or appended after the main submission) are allowed. We encourage authors to include code links in the camera-ready version.

Dual Submission

NewSumm 2025 allows dual submission as long as the authors commit to a single venue before the camera-ready deadline. We will not consider any paper that overlaps significantly in content or results with papers that will be (or have been) published elsewhere. Authors submitting more than one paper to NewSumm 2025 must ensure that their submissions do not overlap significantly (>25%) with each other in content or results. Authors can submit up to 100 MB of supplementary materials separately, and are highly encouraged to submit their code for reproducibility purposes.

Fast-Track Submission

If your paper has been reviewed by ACL, EMNLP, EACL, or ARR and its average rating is higher than 2.5 (for either the soundness or the excitement score), it qualifies for fast-track submission. In the appendix, please include the reviews and a short statement describing which parts of the paper have been revised.

ACL Rolling Review (ARR) Submissions: Our workshop also welcomes submissions from ARR. Authors of papers that were submitted to ARR and have their meta-review ready may submit their papers and reviews for consideration until October 10, 2025; this includes submissions to the August 15 ARR deadline. Acceptance decisions will be announced by October 17, 2025. Commitments should be made via the workshop submission website, the START/SoftConf submission portal (“ACL Rolling Review Commitment” submission type).

Non-archival Option

ACL workshops are traditionally archival. To allow dual submission of work, we are also including a non-archival track. Authors have the flexibility to submit their unpublished research in a non-archival format, where only the abstract will be included in the conference proceedings. These non-archival submissions are expected to meet the same quality criteria as their archival counterparts and will undergo an identical review process. This option is designed to facilitate future publication opportunities in journals or conferences that disallow previously archived material. It also aims to foster engagement and constructive feedback on well-developed but yet-to-be-published work. Like archival submissions, non-archival entries must conform to the established formatting and length guidelines.

Important Dates:

  • Sep. 1, 2025: Workshop Submission Due Date

  • Oct. 10, 2025: Fast-Track Submission and ARR Commitment Deadline

  • Oct. 17, 2025: Notification of Acceptance (Direct, ARR, and Fast-Track Notification)

  • Oct. 24, 2025: Camera-ready Papers Due

  • Dec. 6, 2025: Workshop Date

Organizers

Yue Dong
University of California, Riverside, USA

Wen Xiao
Microsoft Azure AI, Canada

Haopeng Zhang
University of Hawaii at Manoa, USA

Rui Zhang
Penn State University, USA

Ori Ernst
McGill University & Mila, Canada

Lu Wang
University of Michigan, USA

Fei Liu
Emory University, USA

Program Committee

  • Shmuel Amar (Bar-Ilan University)
  • Florian Boudin (JFLI, Nantes Université)
  • Avi Caciularu (Google)
  • Arie Cattan (Bar-Ilan University)
  • Hou Pong Chan (Alibaba DAMO Academy)
  • Khaoula Chehbouni (McGill University, Mila)
  • Ziling Cheng (McGill University & Mila)
  • Jackie Cheung (Mila / McGill)
  • Maxime Darrin (Mistral AI)
  • Felice Dell'Orletta (Istituto di Linguistica Computazionale “Antonio Zampolli”, CNR-ILC)
  • Ron Eliav (Bar-Ilan University)
  • Tobias Falke (Amazon AGI)
  • Lorenzo Flores (Mila - Quebec AI Institute)
  • Yu Fu (University of California, Riverside)
  • Eran Hirsch (Bar-Ilan University)
  • Zhe Hu (The Hong Kong Polytechnic University)
  • Xinyu Hua (Bloomberg)
  • Patrick Huber (Meta)
  • Hayate Iso (Megagon Labs)
  • Ayal Klein (Bar-Ilan University)
  • Wojciech Kryscinski (Cohere)
  • Elena Lloret (University of Alicante)
  • Margot Mieskes (University of Applied Sciences, Darmstadt)
  • Manabu Okumura (Tokyo Institute of Technology)
  • Jessica Ouyang (UT Dallas)
  • G M Shahariar (University of California, Riverside)
  • Haz Sameen Shahgir (University of California Riverside)
  • Ori Shapira (OriginAI)
  • Aviv Slobodkin (Bar-Ilan University)
  • Cesare Spinoso (McGill University)
  • Esaú Villatoro Tello (Idiap Research Institute, CH)
  • David Wan (UNC Chapel Hill)
  • Haohan Yuan (ALOHA Lab, University of Hawaii at Manoa)
  • Yusen Zhang (Penn State University)
  • Nan Zhang (The Pennsylvania State University)
  • Shiyue Zhang (Bloomberg)
  • Ming Zhong (UIUC)
  • Xiyuan Zou (McGill / MILA)