Artificial Intelligence (AI) research is accelerating, and its rapid development, innovations, and discoveries are already affecting society in dramatic ways: autonomous vehicles, AI-generated music, poetry, and storytelling, customer service bots and portals, and more. The term “transformative AI” describes a range of advances in AI that could affect society in dramatic and difficult-to-reverse ways. Government policies and regulations will undoubtedly find it extremely difficult to keep pace with AI’s technological progress. Researchers are already working on advance-warning systems for possible extreme events. However, AI forecasting based on measuring AI progress is still in its early stages, and its utility has been challenged by those who argue that it cannot account for revolutionary breakthroughs and discoveries: advances that could achieve Artificial General Intelligence (AGI), allowing machines to adapt to a wide variety of situations to maximize their potential; high-level machine intelligence (HLMI), performing at the level of an average human adult on the key cognitive measures needed for economically relevant tasks; or “superintelligence,” which Nick Bostrom describes as intelligence that “greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014). This, some would argue, leads to an “AI control problem” that may be unresolvable.
We have assembled an exceptional panel for our second McLaughlin College Union Debate, which will consider the following proposition:
The rapidly accelerating advances in artificial intelligence (AI) could pose, quite likely, a serious existential threat to humankind in the not too distant future.
Wednesday, November 25, 12:30 – 2:30 pm via Zoom