Output list
Preprint
Posted to a preprint site, 2025
arXiv.org
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 2,500 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions.
Dataset
Published Winter 2025
Phenotype data for various traits, including plant height, panicle length, flowering time, node number, branch number and seed number, of over 1,000 oat accessions from a natural population and Bannister mutants. The data were collected from 2021 to 2024 across multiple locations in Western Australia, including Perth, Manjimup, Williams and Mount Barker.
Dataset
Published Autumn 2025
Septoria disease scoring of over 1,200 oat accessions from different germplasm sets, including natural, breeding and mutant populations. The phenotype data were collected from 2022 to 2023 in multiple field trials in Manjimup and Williams and glasshouse trials in Perth, Western Australia, as part of the GRDC-funded project UMU2404-010RTX (Further discovery of improved sources of Septoria resistance).