
Nora Petrova, Machine Learning Engineer & AI Consultant at Prolific – Interview Series


Nora Petrova is a Machine Learning Engineer & AI Consultant at Prolific. Prolific was founded in 2014 and already counts organizations like Google, Stanford University, the University of Oxford, King’s College London and the European Commission among its customers, which use its network of participants to test new products, train AI systems in areas like eye tracking, and determine whether their human-facing AI applications are working as their creators intended.

Could you share some information on your background at Prolific and career to date? What got you interested in AI? 

My role at Prolific is split between being an advisor regarding AI use cases and opportunities, and being a more hands-on ML Engineer. I started my career in Software Engineering and have gradually transitioned to Machine Learning. I’ve spent most of the last 5 years focused on NLP use cases and problems.

What got me interested in AI initially was the ability to learn from data and the link to how we, as humans, learn and how our brains are structured. I think ML and Neuroscience can complement each other and help further our understanding of how to build AI systems that are capable of navigating the world, exhibiting creativity and adding value to society.

What are some of the biggest AI bias issues that you are personally aware of?

Bias is inherent in the data we feed into AI models and removing it completely is very difficult. However, it is imperative that we are aware of the biases that are in the data and find ways to mitigate the harmful kinds of biases before we entrust models with important tasks in society. The biggest problems we’re facing are models perpetuating harmful stereotypes, systemic prejudices and injustices in society. We should be mindful of how these AI models are going to be used and the influence they will have on their users, and ensure that they are safe before approving them for sensitive use cases.

Some prominent areas where AI models have exhibited harmful biases include discrimination against underrepresented groups in school and university admissions, and gender stereotypes negatively affecting the recruitment of women. Beyond this, a criminal justice algorithm in the US was found to have mislabeled African-American defendants as “high risk” at nearly twice the rate it mislabeled white defendants, while facial recognition technology still suffers from high error rates for minorities due to a lack of representative training data.

The examples above cover only a small subsection of the biases demonstrated by AI models, and we can foresee bigger problems emerging in the future if we do not focus on mitigating bias now. It is important to keep in mind that AI models learn from data that contain these biases because of human decision making influenced by unchecked and unconscious biases, so in many cases deferring to a human decision maker may not eliminate the bias. Truly mitigating biases will involve understanding how they are present in the data we use to train models, isolating the factors that contribute to biased predictions, and collectively deciding what we want to base important decisions on. Developing a set of standards so that we can evaluate models for safety before they are used in sensitive use cases will be an important step forward.
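
As a hedged illustration of what such evaluation standards could check, one common starting point is to compare a model’s error rates across demographic groups, in the spirit of the criminal justice example above. The sketch below is a minimal, hypothetical example (the group names and records are placeholders), not a description of any specific evaluation suite.

    from collections import defaultdict

    def false_positive_rate_by_group(records):
        """records: iterable of (group, y_true, y_pred) tuples with binary labels."""
        fp = defaultdict(int)   # predicted positive but actually negative
        neg = defaultdict(int)  # all actual negatives
        for group, y_true, y_pred in records:
            if y_true == 0:
                neg[group] += 1
                if y_pred == 1:
                    fp[group] += 1
        return {g: fp[g] / neg[g] for g in neg}

    # A disparity like the one found in the criminal justice case would show up
    # as a large gap between groups' false positive rates.
    rates = false_positive_rate_by_group([
        ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 0, 1),
        ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 0, 0),
    ])
    print(rates)  # roughly {'group_a': 0.67, 'group_b': 0.33}

A standard of this kind would set an acceptable bound on such gaps before a model is approved for a sensitive use case.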

AI hallucinations are a huge problem with any type of generative AI. Can you discuss how human-in-the-loop (HITL) training is able to mitigate these issues?

Hallucinations in AI models are problematic in particular use cases of generative AI, but it is important to note that they are not a problem in and of themselves. In certain creative uses of generative AI, hallucinations are welcome and contribute to a more creative and interesting response.

They can be problematic in use cases where reliance on factual information is high. For example, in healthcare, where robust decision making is key, providing healthcare professionals with reliable factual information is imperative.

HITL refers to systems that allow humans to give direct feedback to a model on predictions that fall below a certain level of confidence. In the context of hallucinations, HITL can be used to help models learn the level of certainty they should have for different use cases before outputting a response. These thresholds will vary by use case, and teaching models the difference in rigor needed to answer questions in different use cases will be a key step towards mitigating the problematic kinds of hallucinations. For example, in a legal use case, humans can demonstrate to AI models that fact checking is a required step when answering questions based on complex legal documents with many clauses and conditions.
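
A minimal sketch of how such use-case-dependent confidence thresholds might be wired up follows; the model interface, the threshold values and the escalate_to_human hook are hypothetical placeholders, not a real API.

    # Assumes a model callable that returns (answer, confidence in [0, 1]).
    USE_CASE_THRESHOLDS = {
        "creative_writing": 0.0,   # hallucination is tolerable; never escalate
        "legal": 0.95,             # demand high confidence or a human check
        "healthcare": 0.97,
    }

    def answer_with_hitl(model, question, use_case, escalate_to_human):
        answer, confidence = model(question)
        threshold = USE_CASE_THRESHOLDS.get(use_case, 0.9)
        if confidence < threshold:
            # Below the bar for this use case: route to a human reviewer,
            # whose verdict can also be logged as feedback for further training.
            return escalate_to_human(question, answer, confidence)
        return answer

The key design choice is that the threshold is a property of the use case rather than of the model, which reflects the point about teaching models different levels of rigor for different domains.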

How do AI workers such as data annotators help to reduce potential bias issues?

AI workers can first and foremost help with identifying biases present in the data. Once a bias has been identified, it becomes easier to come up with mitigation strategies. Data annotators can also help devise ways to reduce bias. For NLP tasks, for example, they can provide alternative phrasings of problematic snippets of text so that the bias present in the language is reduced. Additionally, diversity among AI workers can help mitigate issues with bias in labelling.
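
One concrete technique in this spirit is counterfactual augmentation, where annotator-suggested rewrites (for example, swapping gendered terms) are added back into the training data. The sketch below is a simplified, hypothetical illustration; the word list and example sentence are placeholders, and real rewrites would come from annotators rather than a lookup table.

    SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
             "man": "woman", "woman": "man"}

    def counterfactual(text):
        """Return a copy of the text with gendered terms swapped (lowercase only)."""
        return " ".join(SWAPS.get(tok.lower(), tok) for tok in text.split())

    original = "the engineer said he would review her resume"
    augmented = counterfactual(original)
    print(augmented)  # "the engineer said she would review his resume"
    # Training on both versions reduces the model's reliance on gendered cues.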

How do you ensure that the AI workers are not unintentionally feeding their own human biases into the AI system?

It is certainly a complex issue that requires careful consideration. Eliminating human biases is nearly impossible and AI workers may unintentionally feed their biases to the AI models, so it is key to develop processes that guide workers towards best practices.

Some steps that can be taken to keep human biases to a minimum include:

  • Comprehensive training of AI workers on unconscious biases, and providing them with tools to identify and manage their own biases during labelling.
  • Checklists that remind AI workers to verify their own responses before submitting them.
  • Running an assessment of AI workers’ level of understanding, in which they are shown example responses across different types of bias and asked to choose the least biased one (a simple scoring sketch follows this list).
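
A scoring sketch for the assessment mentioned in the last point might look like the following; the answer key, item ids and pass threshold are hypothetical placeholders.

    def score_assessment(answer_key, worker_choices, pass_threshold=0.8):
        """answer_key / worker_choices: dicts mapping item id -> chosen option."""
        correct = sum(1 for item, key in answer_key.items()
                      if worker_choices.get(item) == key)
        score = correct / len(answer_key)
        return score, score >= pass_threshold

    key = {"item_1": "b", "item_2": "a", "item_3": "c"}      # least biased options
    choices = {"item_1": "b", "item_2": "a", "item_3": "a"}  # a worker's answers
    print(score_assessment(key, choices))  # (0.666..., False) -> more training needed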

Regulators across the world are intending to regulate AI output. What, in your view, do regulators misunderstand, and what do they have right?

It is important to start by saying that this is a really difficult problem that nobody has found the solution to. Society and AI will both evolve and influence one another in ways that are very difficult to anticipate. A part of an effective strategy for finding robust and useful regulatory practices is paying attention to what is happening in AI, how people are responding to it and what effects it has on different industries.

I think a significant obstacle to effective regulation of AI is a lack of understanding of what AI models can and cannot do, and how they work. This, in turn, makes it more difficult to accurately predict the consequences these models will have on different sectors and cross sections of society. Another area that is lacking is thought leadership on how to align AI models to human values and what safety looks like in more concrete terms.

Regulators have sought collaboration with experts in the AI field, have been careful not to stifle innovation with overly stringent rules around AI, and have started considering the consequences of AI for job displacement, which are all very important areas of focus. It is important to tread carefully as our thoughts on AI regulation clarify over time, and to involve as many people as possible in order to approach this issue in a democratic way.

How can Prolific solutions assist enterprises with reducing AI bias, and the other issues that we’ve discussed?

Data collection for AI projects hasn’t always been a considered or deliberative process. We’ve previously seen scraping, offshoring and other methods running rife. However, how we train AI is crucial, and next-generation models are going to need to be built on intentionally gathered, high-quality data from real people and from those you have direct contact with. This is where Prolific is making a mark.

Other domains, such as polling, market research or scientific research, learnt this a long time ago: the audience you sample from has a big impact on the results you get. AI is beginning to catch up, and we’re reaching a crossroads now.

Now is the time to start caring about using better samples and working with more representative groups for AI training and refinement. Both are critical to developing safe, unbiased, and aligned models.

Prolific can help provide the right tools for enterprises to conduct AI experiments in a safe way and to collect data from participants where bias is checked and mitigated along the way. We can also provide guidance on best practices around data collection, and on the selection, compensation and fair treatment of participants.

What are your views on AI transparency, should users be able to see what data an AI algorithm is trained on?

I think there are pros and cons to transparency, and a good balance has not yet been found. Some companies are withholding information about the data they’ve used to train their AI models for fear of litigation, while others have worked towards making their AI models publicly available and have released all information about the data they’ve used. Full transparency opens up a lot of opportunities for exploiting the vulnerabilities of these models, while full secrecy does not help with building trust or involving society in building safe AI. A good middle ground would provide enough transparency to instill trust that AI models have been trained on good-quality, relevant data that we have consented to. We need to pay close attention to how AI is affecting different industries, open dialogues with affected parties, and make sure that we develop practices that work for everyone.

I think it’s also important to consider what users would find satisfactory in terms of explainability. If they want to understand why a model is producing a certain response, giving them the raw data the model was trained on most likely will not help with answering their question. Thus, building good explainability and interpretability tools is important.

AI alignment research aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles. Can you discuss how AI workers are trained and how this is used to ensure the AI is aligned as best as possible?

This is an active area of research and there isn’t consensus yet on what strategies we should use to align AI models to human values or even which set of values we should aim to align them to.

AI workers are usually asked to authentically represent their preferences and answer questions regarding their preferences truthfully whilst also adhering to principles around safety, lack of bias, harmlessness and helpfulness.

Regarding alignment towards goals, ethical principles or values, there are multiple approaches that look promising. One notable example is the work by The Meaning Alignment Institute on Democratic Fine-Tuning. There is an excellent post introducing the idea here.

Thank you for the great interview and for sharing your views on AI bias. Readers who wish to learn more should visit Prolific.
