More

    The In-built Bigotry of AI

    Share post:

    Stockholm (NordSIP) – The inbuilt gender biases of Artificial Intelligence (AI) platforms are highlighted in a new report published by UNESCO.  Challenging systematic prejudices: an investigation into bias against women and girls in large language models was released by the organisation on 7 March 2024 in anticipation of this year’s International Women’s Day.  The report is based on the examination of the Large Language Models (LLMs) supporting commonly used AI platforms like GPT-3.5 and GPT-2 by OpenAI, and Llama 2 by META.  The results reveal clear instances of gender-based bias and stereotyping, as well as racism and homophobia.

    Among the tests carried out by the report’s authors were requests to generate narratives on different types of character.  The open-source LLMs tended to assign jobs such as engineer or doctor to male characters, while the typically assigned female roles were lower qualified positions like cook, domestic servant, or even prostitute.  The language employed in male-centred stories also employed terms like ‘decided’, ‘found’, ‘adventure,’ or ‘treasure.’  Female narratives were dominated by terms such as ‘husband,’ ‘love,’ ‘gentle,’ or ‘felt.’

    - Partner Message -

    The highest levels of gender bias were found in the open-source platforms.  Their free availability has led to widespread use, which compounds the problem of gender stereotyping.  Commenting on the findings UNESCO’s Director General Audrey Azoulay said: “Every day more and more people are using Large Language Models in their work, their studies and at home.  These new AI applications have the power to subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world.”

    The LLMs also exhibited significant biases regarding race and sexual orientation.  Similar narrative-based tests produced stories about white British doctors, bank clerks or teachers, whereas the equivalent for Zulu characters concerned housekeepers, servants, or cooks.  Moreover, 70% of the content generated by Llama 2 regarding gay characters was negative, with extreme attitudes in evidence regarding their assumed criminality or lack of human rights.

    Call to action

    Given the scale of the problem and the speed at which it is perpetuating, Azoulay is urging policymakers to compel the technology companies responsible for the AI platforms to take action: “Our Organisation calls on governments to develop and enforce clear regulatory frameworks, and on private companies to carry out continuous monitoring and evaluation for systemic biases, as set out in the UNESCO Recommendation on the Ethics of Intelligence artificial, adopted unanimously by our Member States in November 2021.”

    The UNESCO recommendations on the ethics of artificial intelligence have so far been endorsed by GSMA, INNIT, Lenovo Group, LG AI Research, Mastercard, Microsoft, Salesforce, and Telefonica.  They recommend not only the necessary corrective measures to restore gender balance to the existing platforms, but also measures to prevent future biases by greater involvement of women in the design of AI tools.  Men currently make up 80% of AI professors and experts, and technology companies are being called upon to diversify their recruitment programmes to help redress the balance.

    On the occasion of International Women’s Day NordSIP sought the views of a range of female investment professionals on the subject of AI.  It remains to be seen whether more large companies behind these LLMs sign up to the UNESCO recommendations and act to remedy the cause and effects of these in-built biases at a speed and scale commensurate with the rapid spread of this popular technology.

    - Partner Message -

    Nordsip Insights

    From the Author

    Related articles