Ethics and AI: take back control

Are researchers being pushed towards the unethical use of artificial intelligence? Can they change that?

To start with, a caveat. Any attendees hoping for practical guidance on how to address ethical issues in project grants involving artificial intelligence would have been disappointed by the ‘Ethics and integrity for AI in research’ webinar organised by the EU’s Scientific Advice Mechanism earlier this month. And any readers of this review hoping likewise will feel the same.

But if it was a deep dive into a hot, much-sensationalised topic that attendees were after, they probably left satisfied—although maybe not feeling very optimistic. For while there was none of the doomsaying polemic that often accompanies discussions of AI, there were frequent evocations of the societal and systemic factors that may make the use of AI technology in research inherently ethically problematic. The central questions the webinar posed were: can researchers stop AI being so problematic? And if so, how? The answers were tentative.

Commercial imperatives

Karen Yeung, interdisciplinary professorial fellow in law, ethics and informatics at the University of Birmingham, UK, and co-author of the Science Advice for Policy by European Academies (Sapea) report on AI in research, published in April, addressed one of those systemic factors early on in her presentation.

She said that while “AI has been an enormous boon” in many disciplines, as it enables the acceleration and automation of tedious and labour-intensive tasks, particularly those involving “analysis of huge volumes of data”, its use presents numerous risks and challenges from an ethical standpoint.

Referring to the Sapea report, she said the authors noted “a really serious problem around opacity because tools were being used that were primarily developed in commerce, and for which we didn’t have access ‘under the hood’ to analyse what the tools were really doing, or [information on] how those tools had been created”.

Such opacity led to “knock-on effects down the line”, Yeung continued, when researchers tried to replicate, and provide full transparency about, the findings generated by AI-enabled research. Intellectual property rights were essentially stopping the scientific community from understanding the biases and limitations of tools deployed in methodologies.

Yeung said this situation has been “somewhat mitigated by the introduction of the EU’s AI Act…but we still have a lack of systematic governance for ‘in the wild’ testing undertaken in industry, even when the potential [scientific and ethical] risks are really quite significant”.

Power imbalance

Barbara Prainsack, professor for comparative policy analysis at the University of Vienna, Austria, and chair of the European Group on Ethics in Science and New Technologies, discussed a further way that companies seek to protect their commercial advantage: by influencing the debate on ethics.

She said: “Industry has an interest in funding endowed chairs, in funding research projects on ethics. They have an interest in [promoting] a kind of ethics that makes processes a bit more ethical but leaves the political economy untouched; it leaves the distribution of power and agency untouched. [They further] a kind of lower-case ethics—some call it ‘ethics-washing’.”

In Prainsack’s view, the distribution of power and agency is what ethics in AI is all about: “Power is really key. Any kind of ethics…that doesn’t think about power in this field doesn’t deserve its name. We need to think about the development, the use, the infrastructure, the energy that is used for AI and data-driven practices, how all that affects the distribution of power; how certain things are made possible because some actors are more powerful.”

Such power analyses must include how AI is used in research and who gets to use it, Prainsack stressed. AI is already enabling more efficient working in the wealthiest institutions, and is thus reinforcing global inequalities. “Access to AI tools, with the energy and resources they require, is a privilege, and it’s very likely that its use [will continue to be split] between those that are privileged and those that are not—unless we have an ethics that changes that,” she said.

Regulation required

How could change be possible? On the question of how to make AI companies act in a more transparent and ethical way (which would, in turn, make the use of AI tools in research more ethical), there was consensus across the panel that better regulation is needed.

The panellists acknowledged that companies would resist this, but said that was partly attributable to a commercial view of regulation as unnecessary ‘red tape’—a view that was misguided, Prainsack said. “Regulation…can also be enabling—think about antitrust law, which can protect smaller businesses from monopolisation. Regulation is also corporate welfare—so subsidies, concessions…If we get regulation right, the businesses with good ideas can benefit more than the ones who just squeeze everyone via the grey areas of the law to generate maximal profits.”

Moreover, Prainsack continued, “Good regulation does not, as it’s sometimes claimed, stifle innovation. Overzealous regulation…stifles innovation but good regulation is also good for technology developers—both in public and private [settings]—because it allows them to plan ahead.”

Sounder incentives

The focus then moved on to AI use in academia, with webinar chair Nausikaä El-Mecky, professor in history of art and visual culture at Pompeu Fabra University, Barcelona, Spain, asking panellists what incentives currently exist to ensure that “ambitious but overworked and [professionally] precarious researchers” will resist the lure of unethical and opaque uses of AI in their work.

Maura Hiney, adjunct professor of research integrity at University College Dublin, Ireland, and chair of the European Federation of Academies of Sciences and Humanities (Allea) permanent working group on science and ethics, replied that the question should be considered as part of the wider debate around research integrity.

Hiney said: “This brings us back to research assessment—what is it that we value as researchers and what is valued about us by our institutions? Until we change that, until we give much more value to doing research in a good, ethical, rigorous and repeatable way [rather than focusing on] how many papers you have in a high-impact journal…it’s going to be very hard to change how people work.”

This change in the values that feed into research assessment is occurring, Hiney said, adding that it is imperative the shift continues.

Way forward

Noting that all of the keys that might unlock a more ethical development and use of AI tools seemed to be held at the institutional and policy level, El-Mecky asked the central questions mentioned at the start of this article: given that AI tools arrive freighted with “a tremendous environmental and human cost”, and given the productivity pressures placed on researchers, what can researchers do to make a more ethical AI possible? How could they be empowered to do so?

Yeung’s reply addressed the first question: “The situation of an individual researcher is relatively disempowered. The structural incentives are so profound that it’s very, very hard to swim against the tide, particularly if you’re a junior researcher. Incentives for speed of publication and number of papers [are] genuinely pernicious.”

Prainsack picked up this line of argument: “Institutions very often reward ‘fast science’ in the bad sense [where a showy publication or result is prioritised over scrupulousness]. As long as institutions do that, the rhetoric of empowering researchers will remain empty.”

Again, the discussion returned to systemic incentives that might encourage a more ethical use of AI. Funders have a role to play here, the panellists agreed, but said most funders have not yet stepped up to play it.

Hiney noted: “It is certainly something which funders are thinking about. But whether they are being specific in their terms and conditions about the responsible use of any tool or technology…Not really. However, they are being much more specific about people adhering to good research practices, and institutions having policies and processes in place.”

Should such stipulations be applied to AI tools, progress might be possible, she said.
