Main Findings:
AI systems are increasingly used to shift decisions from humans to automated systems, potentially narrowing the space for democratic participation. The risk that AI erodes democracy is exacerbated where most people are excluded from the ownership and production of the AI technologies that will affect them.
AI learns from datasets, but those datasets very often exclude key parts of the population. Where marginalized groups are represented, datasets often contain derogatory terms or omit explanatory contextual information that is hard to categorise accurately in a format AI can process. The resulting biases in AI design raise concerns about the quality and representativeness of AI-based decisions and their impact on society.
There is very little two-way communication between the developers and users of AI technologies, such that users function mainly as providers of personal data. Largely excluded from shaping AI’s role in human decision-making, everyday individuals may feel more marginalized and less invested in building a healthy and sustainable society.
Yet AI’s capacity to detect patterns in big data provides new ways to reach parts of the population excluded from traditional policymaking. It can help identify structural discrimination and incorporate information from those otherwise ignored in important decisions. AI could enhance public participation both by providing decision-makers with better data and by helping to communicate complex decisions – and their consequences – to wider parts of the population.