Are democratic societies ready for a future in which AI algorithmically allocates limited supplies of ventilators or hospital beds during pandemics? Or one in which AI fuels an arms race between creating and detecting disinformation? Or one in which AI sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?
Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations surrounding the technologies' use. The difficult dilemmas posed by artificial intelligence are already emerging at a pace that overwhelms modern democracies' ability to collectively work through those problems.
Broad public engagement, or the lack thereof, has long shaped how societies assimilate emerging technologies, and it is critical to addressing the challenges those technologies pose.
Ready or not: unintended consequences
Striking a balance between the awesome possibilities of emerging technologies like AI and the need for societies to think through both intended and unintended outcomes is not a new challenge. Nearly fifty years ago, scientists and policymakers met in Pacific Grove, California, for what is often referred to as the Asilomar Conference, to decide the future of recombinant DNA research, or transplanting genes from one organism into another. Public participation and input into their deliberations were minimal.
Without the good-faith involvement of broad cross-sections of public and expert stakeholders, societies are severely limited in their ability to anticipate and mitigate the unintended consequences of rapidly emerging technologies such as AI. And there are real downsides to limited participation. If the Asilomar organizers had sought such broad input fifty years ago, questions of cost and access would likely have shared the agenda with the science and ethics of deploying the technology. Had that happened, the unaffordability of recent CRISPR-based sickle cell treatments, for example, might have been avoided.
AI runs a very real risk of creating similar blind spots, with intended and unintended consequences that will often not be apparent to elites such as technology leaders and policymakers. If societies fail to “ask the right questions that people care about,” science and technology studies scholar Sheila Jasanoff said in a 2021 interview, “then no matter what the science says, you wouldn’t come up with the right answers or options for society.”
Even AI experts are concerned about how unprepared societies are to move forward with the technology responsibly. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison surveyed nearly 2,200 researchers who had published on the topic of AI. Nine in ten (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential impacts of AI applications.
Who gets a say on AI?
Industry leaders, policymakers and academics have been slow to adapt to the rapid rise of powerful AI technologies. In 2017, researchers and scientists gathered in Pacific Grove for another small expert-only meeting, this time to outline principles for future AI research. Senator Chuck Schumer plans to host the first of a series of AI Insight Forums on September 13, 2023, aimed at helping Beltway policymakers think through AI risks with technology leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.
Meanwhile, there is a hunger among the public to help shape our collective future. Only about a quarter of US adults in our 2020 AI survey agreed that scientists “should be able to conduct their research without consulting the public” (27.8%). Two-thirds (64.6%) believed that “the public should have a say in how we apply scientific research and technology in society.”
The public’s desire for participation goes hand in hand with widespread distrust of government and business when it comes to shaping AI’s development. In a 2020 national survey by our team, fewer than one in ten Americans said they trust Congress (8.5%) or Facebook (9.5%) to keep society’s best interest in mind when developing AI.
A healthy dose of skepticism?
The public’s deep mistrust of major regulatory and industry players is not entirely unfounded. Industry leaders are struggling to detach their commercial interests from efforts to develop an effective regulatory system for AI. This has led to a fundamentally messy policy climate.
It is not necessarily a problem for tech companies to help regulators think through the potential and complexity of technologies like AI, especially if they are transparent about potential conflicts of interest. However, technology leaders’ input on technical questions about what AI can or could be used for is only one small piece of the regulatory puzzle.
Much more urgently, societies need to figure out what kinds of applications AI should be used for, and how. Answers to those questions can only emerge from public debates involving a wide range of stakeholders about values, ethics and fairness. Meanwhile, the public is increasingly concerned about the use of AI.
AI may not wipe out humanity anytime soon, but it is likely to increasingly disrupt life as we know it. Societies have a limited window of opportunity to find ways to engage in good-faith debates and collaborate on meaningful AI regulation to ensure these challenges do not overwhelm them.
This article was republished from The Conversation under a Creative Commons license. Read the original article by Dietram A. Scheufele, Dominique Brossard and Todd Newman, social scientists from the University of Wisconsin-Madison.