*For more insights into the humans behind the machine, please see Rumman Chowdhury’s profile.*
Algorithms and big data have unleashed a conversation on embedded bias, and on political and social consequences such as concentration of power. Irene Solaiman is bringing another critical value to the ethics equation – culture. What values and voices is generative AI mimicking? And how is it impacting people, relationships, societies?
Solaiman went from human rights to computer science, and from OpenAI – where she helped release GPT-2 to the public – to Hugging Face, a popular platform and community that helps users build, deploy, and train machine learning models. There, she is focusing on ethical openness – how AI systems should be disseminated. Closed, open, or gradual access? Controlled or decentralised? Which kind is safe, responsible, beneficial – across cultural groups? Which allows insight into the data driving the algorithms?
The mechanics matter. Consider the increasing faith we are placing in genAI as a decision-maker. There are the more obvious concerns in evidence – systems that favour white, male candidates over others. Classify people of colour as fast-food workers or low-income earners. Answer with claims that women cannot handle numbers. And there are the more “stealth” practices – what do we deem “sensitive”? How is “beauty” classified? What cultural norms and customs, social skills and personal values have been fed into large language models (LLMs)?
It is no surprise that AI systems largely echo the cultural values of “WEIRD” societies (Western, Educated, Industrialised, Rich, Democratic) – yet they are being deployed across diverse communities. Could taking the training data behind an LLM out of the lab and into the public square help gather better feedback and course-correct?
Solaiman believes that cultural value alignment will never be solved: “We’re always going to be figuring out how to empower different groups of people.” And so she spends time researching public policy and social impact around value alignment. Responsible releases. Misuse and malicious use. She is leading conversations on these urgent questions, and advising AI initiatives at the OECD and the IEEE. Because the near-and-now threats revolve around the human element. “We’ve hooked up a lot of our personal lives, from social media to our bank accounts. I don’t fear AI systems getting access to nuclear codes. I fear people giving technical systems or autonomous systems this incredible power and access,” she explains.
At SYNAPSE, Irene Solaiman will talk about prompting OpenAI’s GPT-2 and GPT-3 in Bangla – the first time this has been done in an OpenAI publication – and what she found. Whether LLMs should be open or closed. Why value alignment is important. And how not to overwrite, but to empower, cultural differences.
