ICLR 2025 Workshop on Sparsity in LLMs (SLLM)

Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference

Call for Papers

We invite researchers working on efficient inference and training of large language models to submit their work for consideration to the SLLM workshop. We welcome submissions on topics including, but not limited to, sparsity in LLMs, mixture of experts, quantization, hardware, and efficient inference.

Submission Details

Authors should upload a short paper of up to four pages in ICLR format, with unlimited pages for references and supplementary material. Note that reviewers are not required to read the appendices, so the paper's claims should be supported by material in the main four-page body. Please submit a single PDF that includes the main paper and supplementary material. We welcome submissions presenting work that is unpublished or currently under submission. We will also consider recently published work (i.e., published in 2024/2025), whether from ICLR 2025 or other venues.

Tiny Papers

This year, ICLR is discontinuing the separate “Tiny Papers” track and is instead requiring each workshop to accept short paper submissions (3–5 pages in ICLR format, with the exact page limit determined by each workshop), with an eye towards inclusion; see https://iclr.cc/Conferences/2025/CallForTinyPapers for more details. Authors of these papers will be earmarked for potential funding from ICLR, but must submit a separate Financial Assistance application, which will be used to evaluate their eligibility. This application for Financial Assistance to attend ICLR 2025 will become available on https://iclr.cc/Conferences/2025/ at the beginning of February and close on March 2nd.

In line with the changes above, we also offer a Tiny Papers track for shorter submissions of up to two pages, intended for work-in-progress and intermediate research milestones. We particularly encourage submissions from underrepresented, under-resourced, and early-career researchers as an opportunity to share their experiences, gather feedback, and foster collaboration.

All submissions will be reviewed in a double-blind process and evaluated on the basis of their technical content and relevance to the workshop. Accepted papers will be presented either in a poster session or as a contributed talk. The workshop is non-archival, so papers may also be submitted to other venues. Accepted papers will be publicly available on OpenReview before the start of the workshop.

If you have any questions, please contact us at sparse-llm-workshop@googlegroups.com.

OpenReview Submission

Papers should be submitted through the workshop's OpenReview page:

https://openreview.net/group?id=ICLR.cc/2025/Workshop/SLLM

Camera-Ready Submission

Please copy this updated ICLR style file into your LaTeX project to use the correct byline, indicating that the paper was accepted at the workshop (and not the main conference).

Important Dates

Full Paper Submission Deadline: February 7th, 2025, 11:59 pm AoE (extended from February 3rd, 2025)

Accept/Reject Notification Date: March 5th, 2025, 11:59 pm AoE

Camera-ready Submission Deadline: TBA

Workshop Date: Sunday, April 27th, 2025 (Singapore)

Volunteering as a Reviewer

If you would like to volunteer as a reviewer, please fill out this form.

Discord

We will be monitoring our Discord channel for questions. 

Mentorship Program

We warmly invite you to take part in our mentorship program, designed to connect early-career researchers (mentees) with experienced senior researchers (mentors). As part of the workshop, we will host a dedicated mentorship session serving as a kickoff meeting that introduces mentors and mentees to one another. The session will focus on guidance for navigating challenges in research and academia, fostering research collaborations, and advice on publishing, securing funding, and building professional relationships. We encourage these conversations to continue over lunch and asynchronously throughout the event.

Call for Mentors: If you are open to serving as a mentor, please fill out this form by March 14, 2025.

Call for Mentees: TBA

