ICLR 2025 Workshop on
Scalable Optimization for Efficient and Adaptive Foundation Models
(SCOPE)


Sunday, April 27th, or Monday, April 28th, 2025

co-located with ICLR 2025 in Singapore

About

In the rapidly evolving landscape of AI, there is significant demand for scalable optimization methods that yield efficient and adaptive foundation models, particularly for serving them at inference time. Specifically, making models efficient while keeping them adaptable to various new downstream tasks poses multiple challenges.

Firstly, a model's ability to quickly learn adaptive and efficient sub-model selection across different tasks requires the capability to perform continual weight updates, compute- and memory-efficient fine-tuning, and personalized adaptation.

Secondly, with the increased demand for long-context understanding and reasoning, the model needs to deliver such efficient adaptation while fetching only the tokens that are informative for a given query. For instance, imagine a model that continually learns from current news events, adapting to the ever-changing global landscape by integrating up-to-date knowledge. Such models not only need efficient fine-tuning on new incoming data streams, but must also handle a KV cache that keeps growing with the requirement to process longer contextual information. Additionally, integrating retrieval-augmented generation (RAG) into foundation models can ensure that generated content is not only relevant but also reflects the most current knowledge, at the cost of a larger prefill.

Thirdly, with this growing demand for contextual adaptation, mixture-of-experts (MoE) models, which can perform test-time adaptation via a learned routing policy, have also gained significant traction. In addition, the emergence of sub-quadratic models with constant-size KV states, as opposed to the growing KV cache of transformers, has opened up a new avenue for model adaptation in the context of retaining information in compressive KV states. These capabilities rely on techniques for adapting foundation models, including fine-tuning, conversion, distillation, and in-context/few-shot learning.

This workshop aims to capture advances in scalable, adaptive fine-tuning, calibration, and conversion that yield inference-efficient quadratic and sub-quadratic foundation models, focusing on methodologies across vision, language, and multi-modal domains. Hosting this workshop at ICLR aligns with the conference’s mission to advance the frontiers of machine learning. The workshop aims to bring together interdisciplinary researchers from core ML/DL, efficient ML, computer vision, and NLP.


Call for Papers

The SCOPE workshop encourages submissions of novel algorithms, research results, and work in progress on building scalable optimization for efficient and adaptive foundation models. We welcome high-quality original papers in the following two tracks:

  • Short/tiny paper track: maximum of 2 pages (per the guidelines of the ICLR 2025 tiny paper track), excluding references and appendix.
  • Main paper track: maximum of 5 pages (using the ICLR 2025 style files and templates), excluding references and appendix.

Outstanding papers will be selected for oral presentation, while all accepted papers may be presented as posters.

Submission Link:

You can submit your paper on the SCOPE OpenReview page.

Important dates:

All deadlines are anywhere on earth (AoE) time.

Paper submission deadline: Monday, February 3rd, 2025
Reviewing deadline: Friday, February 21st, 2025
Author notification: Wednesday, March 5th, 2025
Camera-ready copy (CRC) deadline: Wednesday, March 26th, 2025

Topics:

The relevant topics of interest at this workshop include (but are not limited to):

  • Efficient Long Context Understanding
  • Sub-Quadratic Models for Foundational Tasks and Personalization
  • Quadratic to Sub-Quadratic Model Conversion
  • Task Specific Adaptive Foundation Models
  • Retrieval Augmented Generation for Efficient Contextual Processing
  • Efficient Sub-Quadratic Foundation Models
  • Adaptive Fine-Tuning for Multimodal Foundation Models
  • Efficient Fine-Tuning for Continual Adaptation and Personalization
  • Model Optimization for Latency and Throughput Efficient Inference
  • Adaptive Routing with Mixture of Experts

General Guideline:

Format: All submissions must be in PDF format using the ICLR 2025 LaTeX style file. Please include the references and appendix in the same PDF as the main paper. The maximum file size for submissions is 50MB. Submissions that violate the ICLR style (e.g., by decreasing margins or font sizes) or the workshop's page limits may be rejected without further review.

Double-Blind Reviewing: The reviewing process will be double blind. As an author, you are responsible for anonymizing your submission. In particular, you should not include author names, author affiliations, or acknowledgements in your submission, and you should avoid providing any other identifying information (even in the supplementary material or in a shared GitHub code link).

Dual-Submission Policy: We welcome ongoing and unpublished works. Authors are also encouraged to submit papers that are under review at another venue (e.g., TMLR) at the time of submission. In such cases, please be mindful of maintaining the anonymity of the other submission; for example, submitting a paper with exactly the same title and content as the other submission is discouraged. Authors are strongly discouraged from submitting works that have already been accepted and presented before ICLR 2025.

Non-Archival: The workshop is a non-archival venue and will not have official proceedings. Workshop submissions can be subsequently or concurrently submitted to other venues.

Visibility: Submissions and reviews will not be made public. Only accepted papers across the different tracks will be made public on the workshop website.

Tiny Paper Track: We encourage the submission of diverse forms of preliminary work through the Tiny Papers Track. This year, ICLR is discontinuing the separate “Tiny Papers” track and is instead requiring each workshop to accept short paper submissions (exact page length determined by each workshop), with an eye towards diversity and inclusion; see the call for tiny papers for more details. Authors of these papers will be earmarked for potential funding from ICLR, but must submit a separate application for Financial Assistance that evaluates their eligibility. The application for Financial Assistance to attend ICLR 2025 will become available on this link at the beginning of February and close on March 2nd.

The goal of the Tiny Papers Track is to build on previous ICLR Tiny Papers efforts and to encourage submissions from under-represented, under-resourced, and budding researchers who may not (yet) have the resources to submit full papers. SCOPE-ICLR 2025 aims to leverage this track to broaden participation across the diversity of research topics centered around scalable and efficient foundation models, and we welcome submissions that provide new ideas or fresh perspectives on the challenging problems under this theme.

We encourage tiny paper submissions of up to 2 pages (excluding references and appendix) using the ICLR 2025 template (no abstract is required). Reviewers will be asked to evaluate the clarity, correctness, and potential reproducibility of the submissions. Submissions from underrepresented groups are especially encouraged.


Invited Speakers

Yu Cheng

Chinese University of Hong Kong

Pavlo Molchanov

NVIDIA Research

Zechun Liu

Meta Reality Labs

Zhangyang (Atlas) Wang

XTX Markets & University of Texas at Austin

Bryan Low

National University of Singapore & AI Singapore

Ziwei Liu

Nanyang Technological University


Organizers

Tianlong Chen

University of North Carolina at Chapel Hill

Shiwei Liu

University of Oxford

Haizhong Zheng

Carnegie Mellon University

Amir Yazdanbakhsh

Google DeepMind

Beidi Chen

Carnegie Mellon University

Yingyan (Celine) Lin

Georgia Institute of Technology



Contact

Reach out to scope2025@googlegroups.com for any questions.

Volunteering as a Reviewer

If you would like to volunteer as a reviewer, please fill out this form.