
    Balance Prediction and Understanding

    AI holds the promise of measuring aspects of applicant potential at a scale and precision previously unattainable. Developing and using AI tools in selection, however, requires striking a delicate balance, and it is vital that those using AI understand how the tools work and how to interpret their outcomes. This transparency not only aids understanding but also strengthens the legitimacy and defensibility of the tools.

    As with any selection tool, what makes an AI tool valid, fair, and relevant is ensuring the chosen data and metrics reflect the qualities essential for success at each institution. The data should also meet high quality standards. Failing to maintain both validity and understandability poses risks to institutions and applicants.

    From principle to practice:

    • Identify characteristics linked to success. Start by clearly defining the characteristics and outcomes the institution intends to measure; these can be identified through focus groups, surveys, and questionnaires. Collaborate with faculty or assessment and evaluation staff to identify and obtain these data. Design or adapt existing AI tools to capture these qualities, ensuring they target what defines a successful student or trainee.
    • Ensure understandability. Implement an AI solution that balances accuracy with clarity. Provide clear explanations of its results and instructions for how to use that information appropriately in selection decisions. This aids effective communication with the team that makes selection decisions, as well as with applicants, leadership, and regulatory bodies.
    • Maintain simplicity. Use an AI tool that avoids unnecessary complexity in both the number of variables and the analytic techniques it uses. A simpler tool is easier for the team to understand and use, and easier to explain to applicants, leadership, and regulatory bodies.
    • Establish and monitor standards for interpretability. Create and track standards, such as feature importance, model transparency scores, and user understanding scores, to ensure the model's decisions are understandable and trustworthy.
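    The simplicity and interpretability practices above can be sketched in code. The following is a minimal, hypothetical illustration, not an AAMC tool or a recommended model: a transparent linear scoring model whose per-feature influence is computed and checked against a dominance standard. All feature names, weights, value ranges, and thresholds are invented for illustration.

```python
# Hypothetical sketch: a transparent, linear applicant-scoring model with a
# simple interpretability check. All feature names, weights, value ranges, and
# thresholds below are invented for illustration only.

def score_applicant(features, weights):
    """Weighted sum of features: easy to explain to reviewers and applicants."""
    return sum(weights[name] * value for name, value in features.items())

def feature_importance(weights, value_ranges):
    """Approximate each feature's influence as |weight| x feature range,
    normalized so the importances sum to 1."""
    raw = {
        name: abs(w) * (value_ranges[name][1] - value_ranges[name][0])
        for name, w in weights.items()
    }
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

# Hypothetical model definition.
weights = {"interview_rating": 0.5, "research_experience": 0.3, "service_hours": 0.2}
value_ranges = {
    "interview_rating": (1, 5),
    "research_experience": (0, 3),
    "service_hours": (0, 10),
}

example_score = score_applicant(
    {"interview_rating": 4, "research_experience": 2, "service_hours": 5}, weights
)

importance = feature_importance(weights, value_ranges)

# Interpretability standard: flag any single feature that dominates the score.
DOMINANCE_THRESHOLD = 0.6  # invented threshold for illustration
dominant = [name for name, share in importance.items() if share > DOMINANCE_THRESHOLD]
```

    In practice, an institution would replace these invented weights and ranges with empirically validated ones, and would track measures such as feature importance and user understanding scores over time, as described above.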