Hannah Arendt Humanities Network Awards Inaugural Elkana Fellowship to Helga Nowotny for Research on AI and Predictive Algorithms
During the award ceremony at CEU in Vienna, Nowotny delivered an insightful keynote lecture that was livestreamed to a global audience, addressing the ambiguities surrounding the rise of artificial intelligence and predictive algorithms.
The lecture was followed by a four-day workshop on artificial intelligence and the digital humanities that drew on Nowotny’s forthcoming book, In AI We Trust: Power, Illusion, and Control of Predictive Algorithms. Scholars attending the workshop hailed from a wide array of network institutions, including CEU, Ashesi University in Ghana, Bard College in the US, Birkbeck College in the UK, BRAC University in Bangladesh, the Institute for Philosophy and Social Theory in Belgrade, the Institute for the Human Sciences in Vienna, and the American University of Central Asia in Kyrgyzstan.
Nowotny’s address and the workshop discussions focused on a series of paradoxes regarding the limits of human certainty. Nowotny and the scholars observed that people have always yearned to know the future and master their fates. AI and predictive algorithms now offer uncannily useful insights into phenomena as diverse as the weather, the economy, and human behavior. Yet with this unprecedented ability to predict the future, the demand for certainty only grows, even though human life itself remains irrational and unpredictable. While artificial intelligence is deployed to simplify and master the world, it also makes the human world more complicated.
Nowotny and the scholars agreed that people too often neglect to ask: How do the desire to know the future, and the increasing ability to predict it, affect humans? What happens to humans, those fallible, passionate, unpredictable beings, when they put their trust and faith in AI and algorithms to predict and plan the future?
Several of the scholars warned against overestimating the power of artificial intelligence; they also worried that misplaced faith in and over-reliance on AI could foreclose future possibilities instead of creating new ones. One common theme was the concern that predictive algorithms might become self-fulfilling prophecies. Against the yearning for certainty that AI promises, the scholars set the need to celebrate human fallibility and embrace uncertainty.
The scholars discussed how new advances in neural networks allow machines to render judgments and explanations that approach human capacities. This raised the question of how to nurture human values in a world where ever more corporate, social, and political decisions will be made by or alongside artificially intelligent systems.
The scholars also examined what it means for humanity to retain meaningful control over the world, which led to yet another paradox: the more people seek control over their environments, the more they find beyond their control. As expectations of safety rise, for example, so does reliance on ever more complicated artificial intelligence. And as people turn to machines to regulate behavior and thus provide more security, they inevitably discover new and greater risks tied to the very tools at their disposal.
Nowotny’s central thesis, that the human desire for certainty and security drives the use of artificial intelligence and an ever more complicated relationship with technology, provided rich fodder for the workshop sessions building on her digital humanities research. Scholars drew on a wide range of knowledge and experience to address the paradoxes she introduced, clarifying some of the human impacts of technological advancement.
Post Date: 07-27-2021