Call For Papers

📅 Important Dates

  • Submission Start: January 5, 2026, AoE
  • Submission Deadline: February 5, 2026, AoE
  • Notification of Acceptance: February 26, 2026, AoE
  • Workshop Date: April 26 or April 27, 2026

Papers can be submitted via OpenReview. Please note that all authors are required to have an OpenReview account. Newly created profiles that do not use an institutional email address are subject to an OpenReview moderation process, which may take up to two weeks. We strongly encourage authors to create their accounts well in advance.

Location: ICLR 2026, Riocentro Convention & Event Center, Rio de Janeiro, Brazil


🧠 Workshop Theme

There is a growing sentiment that AGI, a system capable of performing most tasks at or beyond human level, may be within reach. The central disagreement is shifting from whether AGI is possible to when it will emerge and what its implications will be.

P-AGI provides a forum to explore how to make today’s scientific research directions resilient and meaningful in the event that AGI becomes a pervasive tool. We ask:

  • What are the limits of reasoning? Could capabilities scale beyond our control?
  • How will the scientific process evolve? Will machines generate discoveries for humans to validate, or will humans generate questions for machines to resolve?
  • How do we ensure trust? As discovery becomes automated, how do we maintain scalable oversight and safety?
  • What is the human role? How do we build systems that enable meaningful human-AI collaboration?

🧭 Submission Tracks

Submissions are non-archival and may include early-stage results, conceptual proposals, critiques, or speculative work. We encourage contributions across two primary tracks:

Track 1: Technical Foundations for a Post-AGI World

Focuses on the core technical challenges required to build, understand, and control highly capable AI systems. We prioritize work on safety, robustness, and the scalability of models that approach human-level intelligence. Suggested topics include:

  • Automated Scientific Discovery: AI-guided program search, symbolic-neural theorem proving, and "Deep Research" paradigms.
  • Scalable & Efficient Intelligence: Hardware-AI co-design, resource-aware scaling, and limits of reasoning systems.
  • Safety, Robustness & Alignment: Superalignment, scalable oversight, next-generation preference learning, and evaluating AGI-level reasoning.
  • Human-AI Collaboration: Transparency, interpretability, and methods enabling meaningful human oversight.

Track 2: Socio-Economic Impact and Future Visions

Invites contributions that analyze the broader societal, ethical, and economic implications of AGI. We encourage speculative works and position papers. Suggested topics include:

  • Economic & Societal Impact: Studies on labor-market disruptions, future of work, and economically grounded benchmarks (e.g., GDPval).
  • Governance & Regulation: Proposals for governing general-purpose systems and comprehensive risk assessments (including Artificial Superintelligence pathways).
  • Foundational Questions: Computational creativity, machine consciousness, and ethical frameworks for AGI deployment.

📌 Submission Format (Tiny Papers)

  • Up to 4 pages, excluding references and appendix.
  • Non-archival (no proceedings; authors retain full rights).
  • Anonymized for double-blind review.
  • Use the ICLR 2026 template.
  • Follow the ICLR LLM usage policy (LLMs are allowed as general-purpose assistive tools, but not as authors).
  • All accepted submissions will be presented as posters; selected works will be invited for short talks.

Since 2025, ICLR has discontinued the separate “Tiny Papers” track and instead requires each workshop to accept short paper submissions (3–5 pages in ICLR format; the exact page limit is determined by each workshop), with an eye towards inclusion; see https://iclr.cc/Conferences/2025/CallForTinyPapers for a history of the ICLR Tiny Papers initiative. Authors of these papers are earmarked for potential funding from ICLR, but must submit a separate application for Financial Assistance that evaluates their eligibility. The application for Financial Assistance to attend ICLR 2026 will open on https://iclr.cc/Conferences/2026/ at the beginning of February and close in early March.

All ICLR participants, including authors, are required to adhere to the ICLR Code of Ethics (https://iclr.cc/public/CodeOfEthics) and the ICLR Code of Conduct (https://iclr.cc/public/CodeOfConduct). More detailed guidance for authors, reviewers, and all other participants will be made available in due course, and participation will require acknowledging and adhering to the provided guidelines.


🧑‍🤝‍🧑 Mentorship & Inclusivity

We are committed to building a diverse, welcoming community. Selected posters will receive structured feedback during the session, and travel support will be available for underrepresented groups (pending sponsorship) in addition to official ICLR Financial Assistance.

We have prioritized diversity in our organizing committee and speaker lineup, ranging from PhD students to senior academics with a 50/50 gender balance. We actively encourage submissions from members of communities including Black in AI, WiML, Queer in AI, and LatinX in AI.


👥 Interested in joining the P-AGI Program Committee?

We believe that maintaining high review quality is essential to the success of the workshop, and one of the most effective ways to achieve this is to keep the review workload reasonable for each reviewer. Members of the P-AGI Program Committee will be officially acknowledged on the workshop website after the event. If you are interested in contributing to the P-AGI Workshop as a Program Committee member, please apply by completing this Google Form.