    Protect Against Algorithmic Bias

    Incorporating AI into selection can enhance fairness, in part by standardizing processes. To realize that benefit, build strategies for combating bias into both the development and the implementation of AI, because the quality and fairness of an AI system depend on appropriate technologies, data, algorithms, and decisions across the whole system.

    Thoughtfully developed AI systems must be informed by high-quality, representative data to avoid bias and other systematic distortions. Following the guidelines set forth by the Data and Trust Alliance1 can help you choose appropriate selection criteria. And while AI may improve efficiency, responsible oversight remains crucial: users must be educated on how to use AI appropriately, so that AI complements rather than supplants human judgment and the integrity and fairness of selection decisions are maintained.
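
    As a concrete illustration, the Python sketch below checks whether each applicant group's share of the data used to develop an AI tool matches its share of the overall applicant pool. The column name, threshold, and function here are hypothetical placeholders, not prescribed values; treat this as a starting point for the kind of data interrogation described above.

        import pandas as pd

        # Hypothetical column name and threshold; substitute your own.
        GROUP_COL = "applicant_group"
        MAX_GAP = 0.05  # flag groups whose shares differ by more than 5 points

        def flag_representation_gaps(applicant_pool: pd.DataFrame,
                                     development_data: pd.DataFrame) -> pd.DataFrame:
            """Compare each group's share of the development data with its
            share of the full applicant pool and flag large deviations."""
            pool_share = applicant_pool[GROUP_COL].value_counts(normalize=True)
            dev_share = development_data[GROUP_COL].value_counts(normalize=True)
            report = pd.DataFrame({"pool_share": pool_share,
                                   "dev_share": dev_share}).fillna(0.0)
            report["gap"] = report["dev_share"] - report["pool_share"]
            report["flag"] = report["gap"].abs() > MAX_GAP
            return report.sort_values("gap")

        # Example usage:
        # report = flag_representation_gaps(pool_df, training_df)
        # print(report[report["flag"]])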

    From principles to practice:

    • Form a diverse oversight committee. Assemble a multidisciplinary team comprising individuals from diverse backgrounds and areas of expertise, including members of the community and AI experts. This group will ensure that the use of AI in the selection process is scrutinized for fairness and representativeness.
    • Consider potential bias in the data. Evaluate the data for potential biases, such as underrepresentation of certain applicant groups or reliance on outcomes with known biases. Just because a particular variable or data source can be used doesn’t mean it should be used. Interrogating the data will help manage the risk that the AI tool inadvertently perpetuates bias.
    • Conduct pilot tests. Before full implementation, pilot the AI tool in a low- or no-stakes setting to confirm that it runs smoothly within the overall system and operates as intended for different student groups. Include specific use-case testing to identify and address unintended consequences early. This expanded testing helps validate the success metrics for the AI models, ensuring they align with institutional goals and standards.
    • Do not change the process mid-cycle. Use the same data collection, analysis, and evaluation processes for an entire application cycle; this consistency is vital for fairness and transparency. Document any changes to these processes when they do occur.
    • Audit AI systems regularly. Schedule and conduct an annual audit of the AI system and its output to identify AI-related biases and other problems in the selection process. Collaborate with a dedicated team of experts to analyze the findings and develop continuous-improvement strategies to implement in the next cycle. Consult recent2 and relevant journal articles3 and technical reports on the use of AI in selection processes, explore tools for examining potential bias such as Admissible ML4 or AI Fairness 360,5 as in the sketch following this list, and consult legal counsel when appropriate.
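
    To make the audit step concrete, here is a minimal sketch using the AI Fairness 360 toolkit5 to compute two widely used bias metrics, disparate impact and statistical parity difference, over one cycle’s selection decisions. The data frame, column names, and group encoding are hypothetical; which metrics and group definitions are appropriate is a decision for the oversight committee, in consultation with legal counsel where needed.

        import pandas as pd
        from aif360.datasets import BinaryLabelDataset
        from aif360.metrics import BinaryLabelDatasetMetric

        # Hypothetical audit data: one row per applicant, a binary group
        # indicator, and the final decision (1 = advanced, 0 = not advanced).
        decisions = pd.DataFrame({
            "group":    [1, 1, 0, 0, 1, 0, 1, 0],
            "selected": [1, 1, 1, 0, 1, 0, 0, 1],
        })

        dataset = BinaryLabelDataset(
            df=decisions,
            label_names=["selected"],
            protected_attribute_names=["group"],
            favorable_label=1,
            unfavorable_label=0,
        )

        metric = BinaryLabelDatasetMetric(
            dataset,
            unprivileged_groups=[{"group": 0}],
            privileged_groups=[{"group": 1}],
        )

        # Disparate impact: ratio of selection rates (unprivileged / privileged);
        # values well below 1.0 warrant closer review.
        print("Disparate impact:", metric.disparate_impact())
        # Statistical parity difference: selection-rate gap (unprivileged - privileged).
        print("Statistical parity difference:", metric.statistical_parity_difference())
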
    Sources Cited
    1. Data and Trust Alliance. Algorithmic bias safeguards. Published December 8, 2021. Updated July 9, 2024. https://dataandtrustalliance.org/work/algorithmic-safety-mitigating-bias-in-workforce-decisions
    2. Keir G, Hu W, Filippi CG, Ellenbogen L, Woldenberg R. Using artificial intelligence in medical school admissions screening to decrease inter- and intra-observer variability. JAMIA Open. 2023;6(1). doi:10.1093/jamiaopen/ooad011
    3. Rottman C, Gardner C, Liff J, Mondragon N, Zuloaga L. New strategies for addressing the diversity-validity dilemma with big data. J Appl Psychol. 2023;108(9):1425. doi:10.1037/apl0001084
    4. H2O.ai. Admissible machine learning models. Updated July 9, 2024. Accessed June 11, 2024. https://docs.h2o.ai/h2o/latest-stable/h2o-docs/admissible.html
    5. IBM. AI Fairness 360. Accessed June 11, 2024. https://aif360.res.ibm.com/