Playlist: Existential risk

6 items:

Review of 2023 with Karin Rudolph

Karin Rudolph is the Founder of Collective Intelligence, a Bristol-based consultancy that provides resources and training to help startups and SMEs embed ethics into the design and development of technology. She is currently working on the launch of the Ethical Technology Network, a pioneering initiative to help businesses identify, assess, and mitigate the potential ethical and societal risks of emerging technologies. Karin has a degree in Sociology and studies in Philosophy, and is a regular speaker at universities and conferences.


Art and AI collaboration with Sarah Brin

Sarah Brin is a futurist and digital creativity specialist. Sarah specializes in interdisciplinary tech collaborations and has directed programs for organizations including Sony Interactive Entertainment, Autodesk, George RR Martin's immersive technology company Meow Wolf, the European Union, SFMOMA, and others.

Her research interests include new economic models for creatives, humanist applications of technology, and playful interventions. You can learn more about her work at sarahbrin.com and goodafternoon.uk.


Taming Uncertainty with Roger Spitz

Based in San Francisco, Roger Spitz is an international bestselling author, President of Techistential (Climate & Foresight Strategy), and Chair of the Disruptive Futures Institute. Spitz is an inaugural member of Cervest’s Climate Intelligence Council, a contributor to IEEE’s ESG standards, and an advisory partner of Vektor Partners (Palo Alto, London), an impact VC firm investing in the future of mobility. Techistential, Spitz’s renowned strategic foresight practice, advises boards, leadership teams, and investors on sustainable value creation and anticipatory governance. He developed the Disruptive Futures Institute into a preeminent global executive education center that helps organizations build capacity for futures intelligence, resiliency, and systemic change.

Spitz is an advisor, writer, and speaker on artificial intelligence, and has invested in a number of AI startups. Through his research and publications, Roger Spitz coined the term Techistentialism, which studies the nature of human beings, existence, and decision-making in our technological world. Today, we face both technological and existential conditions that can no longer be separated. Spitz chairs Techistential's Center for Human & Artificial Intelligence. He is also a member of IEEE, the Association for the Advancement of Artificial Intelligence (Palo Alto), and The Society for the Study of Artificial Intelligence & Simulation of Behaviour (UK).

Spitz has written four influential books as part of “The Definitive Guide to Thriving on Disruption” collection, which became an instant classic. He publishes extensively on decision-making in uncertain and complex environments, with bestselling books in Business Technology Innovation, Future Studies, Green Business, Sustainable Economic Development, Business Education, and Strategic Management & Forecasting.

To learn more about Roger Spitz's work:
The Definitive Guide to Thriving on Disruption: www.thrivingondisruption.com
Techistential: www.techistential.ai
Disruptive Futures Institute: www.disruptivefutures.org


Responsible AI Research with Madhulika Srikumar

Madhulika Srikumar is a program lead for the Safety-Critical AI initiative at Partnership on AI, a multistakeholder nonprofit shaping the future of responsible AI. Her current focus includes community engagement on responsible publication norms in AI research, as well as diversity and inclusion in AI teams. Madhu is a lawyer by training and completed her graduate studies (LL.M.) at Harvard Law School.

Managing the Risks of AI Research: Six Recommendations for Responsible Publication


AI alignment with Rohin Shah

Rohin is a 6th-year PhD student in Computer Science working at the Center for Human-Compatible AI (CHAI) at UC Berkeley. His general interests in CS are very broad, including AI, machine learning, programming languages, complexity theory, algorithms, and security, and so he started his PhD working on program synthesis. However, he became convinced that it is really important for us to build safe, aligned AI, and so he moved to CHAI at the start of his 4th year. He now thinks about how to provide specifications of good behaviour in ways other than reward functions, especially ones that do not require much human effort. He is best known for the Alignment Newsletter, a weekly publication covering recent content relevant to AI alignment, with over 1,600 subscribers.


AI future scenarios with Calum Chace

Calum Chace is a best-selling author of fiction and non-fiction books and articles focusing on artificial intelligence. His books include “Pandora's Brain”, a techno-thriller about the first superintelligence, and “Surviving AI”, a non-fiction book about the promise and the challenges of AI.

He is a regular speaker on artificial intelligence and related technologies and runs a blog on the subject at www.pandoras-brain.com. He also serves as chairman and coach for a selection of growing companies.

A long time ago, Calum studied philosophy at Oxford University, where he discovered that the science fiction he had been reading since boyhood was actually philosophy in fancy dress.