Making sense of AI Search tools: A consistent way to measure performance
Conference presentation

Erin Montagu
ANZREG Virtual Conference 2025 (ONLINE, 10/06/2025–12/06/2025)
11/06/2025

Abstract

As AI-powered search tools become rapidly embedded in library systems and databases, we face a growing challenge: how do we reliably evaluate these tools to make informed, evidence-based decisions? When deciding whether to activate a feature like Primo Research Assistant, we needed a clear, consistent, and efficient method of assessment for both library staff and the university community. In response, we developed a practical evaluation rubric, adapted from an open educational resource, and transformed it into a streamlined Microsoft Form. This tool allows library staff to assess AI search tools against key criteria quickly and confidently. In this session, we share our rubric, walk through the evaluation process, and discuss how it is shaping our decisions around Primo Research Assistant and other AI tools.
