Philosophy
Wendell Wallach is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence, biotechnologies and neuroscience. Wendell is the Uehiro/Carnegie Senior Fellow at the Carnegie Council for Ethics in International Affairs (CCEIA) where he co-directs (with Anja Kaspersen) the AI and Equality Initiative. He is also senior advisor to The Hastings Center and a scholar at the Yale University Interdisciplinary Center for Bioethics where he chaired Technology and Ethics studies for eleven years.
Wallach’s latest book, a primer on emerging technologies, is entitled "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control". In addition, he co-authored (with Colin Allen) "Moral Machines: Teaching Robots Right From Wrong" and edited the eight-volume Library of Essays on the Ethics of Emerging Technologies, published by Routledge in Winter 2017. He received the World Technology Award for Ethics in 2014 and for Journalism and Media in 2015, as well as a Fulbright Research Chair at the University of Ottawa in 2015-2016.
The World Economic Forum appointed Mr. Wallach co-chair of its Global Future Council on Technology, Values, and Policy for the 2016-2018 term, and he is presently a member of their AI Council. Wendell was the lead organizer for the 1st International Congress for the Governance of AI (ICGAI).
Dr Kerry McInerney (née Mackereth) is a Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where she co-leads the Global Politics of AI project on how AI is impacting international relations. She is also a Research Fellow at the AI Now Institute (a leading AI policy thinktank in New York), an AHRC/BBC New Generation Thinker (2023), one of the 100 Brilliant Women in AI Ethics (2022), and one of Computing’s Rising Stars 30 (2023). Kerry is the co-editor of the collection Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines (2023, Oxford University Press), the collection The Good Robot: Why Technology Needs Feminism (2024, Bloomsbury Academic), and the co-author of the forthcoming book Reprogram: Why Big Tech is Broken and How Feminism Can Fix It (2026, Princeton University Press).
Eleanor is a Senior Research Fellow at the University of Cambridge's Centre for the Future of Intelligence, and teaches AI professionals about AI ethics on a Master's course at Cambridge.
She specialises in using feminist ideas to make AI better and safer for everyone. She is also currently building the world's first free and open-access tool that helps companies meet the EU AI Act's obligations.
She has presented at the United Nations, The Financial Times, Google DeepMind, NatWest, the Southbank Centre, BNP Paribas, The Open Data Institute (ODI), the AI World Congress, the Institute of Science & Technology, and more. Her work on AI-powered video hiring tools and gendered representations of AI scientists in film was covered by the BBC, Forbes, the Guardian and international news outlets. She has appeared on BBC Moral Maze and BBC Radio 4 'Arts & Ideas'.
Eleanor is also the co-host of The Good Robot Podcast, where she asks key thinkers 'what is good technology?'. She also does lots of presentations for young people, and is a TikToker for Carole Cadwalladr's group of investigative journalists, 'The Citizens'.
She is also an expert on women writers of speculative and science fiction from 1666 to the present, the subject of her work An Experience of the Impossible: The Planetary Humanism of European Women’s Science Fiction.
She is the co-editor of The Good Robot: Feminist Voices on the Future of Technology, and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines.
She began her career in financial technology and e-commerce and co-founded a company selling Spanish ham online!
ADAM BRAUS is a professor, polymath, author, and expert in the fields of ethics, education, and organizational management. He is a writer, speaker, teacher, podcaster, coach, and consultant. He lives in San Francisco, California. You can subscribe to his weekly podcast at solutionsfromthemultiverse.com, find links to his books, or contact him via his website adambraus.com.
Also, you can find episode 68 of Solutions from the Multiverse featuring Ben Byford here.
Alice Thwaite is a technology ethicist and philosopher. She founded the Echo Chamber Club and Hattusia, where she won the CogX Award for Outstanding Achievements and Research Contributions in AI Ethics. She currently works as Head of Ethics at OmniGOV, MGOMD.
Based in San Francisco, Roger Spitz is an international bestselling author, President of Techistential (Climate & Foresight Strategy), and Chair of the Disruptive Futures Institute. Spitz is an inaugural member of Cervest’s Climate Intelligence Council, a contributor to IEEE’s ESG standards, and an advisory partner of Vektor Partners (Palo Alto, London), an impact VC firm investing in the future of mobility. Techistential, Spitz’s renowned strategic foresight practice, advises boards, leadership teams, and investors on sustainable value creation and anticipatory governance. He developed the Disruptive Futures Institute into a preeminent global executive education center that helps organizations build capacity for futures intelligence, resiliency, and systemic change.
Spitz is an advisor, writer, and speaker on Artificial Intelligence, and has invested in a number of AI startups. From his research and publications, Roger Spitz coined the term Techistentialism which studies the nature of human beings, existence, and decision-making in our technological world. Today, we face both technological and existential conditions that can no longer be separated. Spitz chairs Techistential's Center for Human & Artificial Intelligence. He is also a member of IEEE, the Association for the Advancement of Artificial Intelligence (Palo Alto), and The Society for the Study of Artificial Intelligence & Simulation of Behaviour (UK).
Spitz has written four influential books as part of “The Definitive Guide to Thriving on Disruption” collection, which became an instant classic. He publishes extensively on decision-making in uncertain and complex environments, with bestselling books in Business Technology Innovation, Future Studies, Green Business, Sustainable Economic Development, Business Education, Strategic Management & Forecasting.
To learn more about Roger Spitz's work:
The Definitive Guide to Thriving on Disruption: www.thrivingondisruption.com
Techistential: www.techistential.ai
Disruptive Futures Institute: www.disruptivefutures.org
Marc Steen works as a senior research scientist at TNO, a research and technology organization in The Netherlands. He earned MSc, PDEng and PhD degrees in Industrial Design Engineering at Delft University of Technology. He worked at Philips and KPN before joining TNO. He is an expert in Human-Centred Design, Value-Sensitive Design, Responsible Innovation, and Applied Ethics of Technology and Innovation.
Marc's first book, Ethics for People Who Work in Tech, was published by Taylor & Francis/CRC Press in October 2022.
Josh Gellers is an Associate Professor in the Department of Political Science and Public Administration at the University of North Florida, Research Fellow of the Earth System Governance Project, and Expert with the Global AI Ethics Institute. He is also a former U.S. Fulbright Scholar to Sri Lanka. Josh has published over two dozen articles and chapters on environmental politics, rights, and technology. He is the author of The Global Emergence of Constitutional Environmental Rights (Routledge 2017) and Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge 2020).
Olivia is an AI Ethicist who works to bring ethical analysis into tech development to create human-centric innovation. She believes there is strength in human values that, when applied to artificial intelligence, lead to robust technological solutions we can trust. Olivia holds an MSc in Philosophy from the University of Edinburgh, with a concentration in AI Ethics and a special focus on probability and moral responsibility in autonomous cars, as well as a BA in Philosophy and Entrepreneurship from Baylor University.
Currently, Olivia works as the Chief Executive Officer of Ethical Intelligence where she leads a remote team of over thirty experts in the Tech Ethics field. She is also the co-founder of the Beneficial AI Society, sits on the Advisory Board of Tech Scotland Advocates and is an active contributor to the development of Ethics in AI.
Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna and author of more than 15 books including AI Ethics (MIT Press), The Political Philosophy of AI (Polity Press), and Introduction to Philosophy of Technology (Oxford University Press). Previously he was Vice Dean of the Faculty of Philosophy and Education, and President of the Society for Philosophy and Technology (SPT). He is also involved in policy advice; for example, he was a member of the European Commission's High-Level Expert Group on AI.
Reid Blackman, Ph.D., is the author of "Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI" (Harvard Business Review Press), Founder and CEO of Virtue, an AI ethical risk consultancy, and a volunteer Chief Ethics Officer for the non-profit Government Blockchain Association. He has also been a Senior Advisor to the Deloitte AI Institute, a Founding Member of Ernst & Young’s AI Advisory Board, and sits on the advisory boards of several startups. His work has been profiled in The Wall Street Journal and Forbes, and he has presented his work to dozens of organizations including Citibank, the FBI, the World Economic Forum, and AWS. Reid’s expertise is relied upon by Fortune 500 companies to educate and train their people and to guide them as they create and scale AI ethical risk programs. Learn more at reidblackman.com.
Ryan Carrier founded ForHumanity after a 25-year career in finance. His global business experience, risk management expertise and unique perspective on how to manage the risk led him to personally launch the non-profit entity, ForHumanity. Ryan focused on Independent Audit of AI Systems as one means to mitigate the risk associated with artificial intelligence, and began to build the business model associated with a first-of-its-kind process for auditing corporate AIs, using a global, open-source, crowd-sourced process to determine "best practices". Ryan serves as ForHumanity’s Executive Director and Chairman of the Board of Directors. In these roles, he is responsible for the day-to-day function of ForHumanity and the overall process of Independent Audit. Prior to founding ForHumanity, Ryan owned and operated Nautical Capital, a quantitative hedge fund which employed artificial intelligence algorithms. He was also responsible for Macquarie’s Investor Products business in the late 2000s. He worked at Standard & Poor’s in the Index business and for the International Finance Corporation’s Emerging Markets Database. Ryan has conducted business in over 55 countries and was a frequent speaker at industry conferences around the world. He is a graduate of the University of Michigan, and became a Chartered Financial Analyst (CFA) in 2004.
Lofred Madzou is a Project Lead for AI at the World Economic Forum, where he oversees global and multistakeholder AI policy projects. He is also a research associate at the Oxford Internet Institute where he investigates various methods to audit AI systems.
Before joining the Forum, he was a policy officer at the French Digital Council, where he advised the French Government on technology policy. Most notably, he has co-written chapter 5 of the French AI National Strategy, entitled "What Ethics for AI?”. He has an MSc in Data Science and Philosophy from the University of Oxford.
Damien Patrick Williams (@Wolven) researches how technologies such as algorithms, machine intelligence, and biotechnological interventions are impacted by the values, knowledge systems, philosophical explorations, social structures, and even religious beliefs of human beings. Damien is especially concerned with how the consideration and treatment of marginalized peoples will affect the creation of so-called artificially intelligent systems and other technosocial structures of human societies. More on Damien's research can be found at AFutureWorthThinkingAbout.com
Carissa Véliz is an Associate Professor in Philosophy at the Institute for Ethics in AI, and a Fellow at Hertford College at the University of Oxford. She works on privacy, technology, moral and political philosophy, and public policy. Véliz has published articles in media such as the Guardian, the New York Times, New Statesman, and the Independent. Her academic work has been published in The Harvard Business Review, Nature Electronics, Nature Energy, and The American Journal of Bioethics, among other journals. She is the author of Privacy Is Power (Bantam Press) and the editor of the forthcoming Oxford Handbook of Digital Ethics.
Dylan Doyle-Burke is currently a PhD student at the University of Denver studying Human-Computer Interaction, Artificial Intelligence Ethics, Public Policy and Religious Studies. His research focus is on creating a Theory of Mind for Artificial Intelligence and creating equal representation at every level of AI product development and implementation. Dylan holds a Bachelor of Arts from Sarah Lawrence College and a Master of Divinity from Union Theological Seminary at Columbia University.
Dylan is an experienced keynote speaker and consultant and has presented at and worked alongside multi-national corporations, the United Nations, world-renowned hospital systems, and many other conferences and institutions to provide insight, consultation, and engaging talks focused on Artificial Intelligence Ethics, responsible technology, and more.
Dylan co-hosts the RadicalAI podcast with Jessie Smith.
Rebecca is a PhD candidate in Machine Ethics and a consultant in Ethical AI at Oxford Brookes University's Institute for Ethical Artificial Intelligence. Her PhD research is entitled 'Autonomous Moral Artificial Intelligence', and as a consultant she specialises in developing practical approaches to embedding ethics in AI products.
Her background is primarily in philosophy. She completed her BA and then an MA in philosophy at the University of Nottingham in 2010, before working in analytics across several different industries. As an undergraduate she had a keen interest in logic, metametaphysics, and the topic of consciousness, spurring her to return to academia in 2017 to undertake a further qualification in psychology at Sheffield Hallam University, before embarking on her PhD.
She hopes she can combine her diverse interests to solve the challenge of creating moral machines.
In her spare time she can be found playing computer games, running, or trying to explore the world.
John Danaher is a Senior Lecturer in Law at the National University of Ireland (NUI) Galway, author of Automation and Utopia and coeditor of Robot Sex: Social and Ethical Implications. He has published dozens of papers on topics including the risks of advanced AI, the meaning of life and the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. His work has appeared in The Guardian, Aeon, and The Philosophers’ Magazine. He is the author of the blog Philosophical Disquisitions and hosts a podcast with the same name.
Bertram F. Malle is Professor of Cognitive, Linguistic, and Psychological Sciences and Co-Director of the Humanity-Centered Robotics Initiative at Brown University. Trained in psychology, philosophy, and linguistics at the University of Graz, Austria, he received his Ph.D. in psychology from Stanford University in 1995. He received the Society of Experimental Social Psychology Outstanding Dissertation award in 1995 and a National Science Foundation (NSF) CAREER award in 1997, and he is past president of the Society of Philosophy and Psychology. Malle’s research focuses on social cognition, moral psychology, and human-robot interaction. He has published his work in 150 scientific publications and several books. His lab page is http://research.clps.brown.edu/SocCogSci.
Marija Slavkovik is an associate professor in AI at the Department of Information Science and Media Studies at the University of Bergen in Norway. She works on collective reasoning and decision making, and is specifically interested in these types of problems in machine ethics. Machine ethics essentially tries to answer the question of how we can program various levels of ethical behaviour into artificial agents. It is a very interesting field for both computer scientists and humanists, and she likes it because it pushes very hard reasoning problems back to the surface of AI.
Marija's background is in computational logic and control theory, and she is also interested in all aspects of automation. She mainly writes scientific articles on computational social choice and multi-agent systems. However, being based in a department that is half media studies, she is exposed to many issues around how information spreads in social networks and how it gets distorted after being spread through a network and/or aggregated. Marija is now trying to bring this problem into the machine ethics conversation: there is a lot of decision automation happening behind the scenes of information sharing, and we see a lot of emergent behaviour in systems of artificial agents and people that we do not yet fully understand or control.
Kate Devlin is Senior Lecturer in Social and Cultural Artificial Intelligence at King's College London. Her research in Human-Computer Interaction and Artificial Intelligence investigates how people interact with and react to technologies, both past and future. She is the author of Turned On: Science, Sex and Robots (Bloomsbury, 2018), which examines the ethical and social implications of technology and intimacy.
She tweets far too often as @drkatedevlin
Julia Mossbridge MA, PhD is a futurist trained in cognitive neuroscience. In addition to being the founder and research director of Mossbridge Institute, LLC, Dr. Mossbridge is a Visiting Scholar in the Psychology Department at Northwestern University, a Fellow at the Institute of Noetic Sciences, the Science Director at Focus@Will Labs, and an Associate Professor in Integral and Transpersonal Psychology at the California Institute of Integral Studies.
Her focus is on teaching and learning about love and time, and she pursues this focus by speaking about love and time, leading projects, conducting research, and coaching technology executives and engineers. She is currently engaged in four love-centered projects: 1) LOVING AIs, a project designed to bring unconditional love into artificial intelligence (especially artificial general intelligence), 2) a project in which she is examining whether hypnosis can be used to induce a state of unconditional love, 3) The Calling, her current book project about how love gets translated into life purpose, and 4) consciously bringing unconditional love into the lives of the tech workers and executives she coaches.
Tim works in academic research and commercial development of Artificial Life (ALife) and Artificial Intelligence (AI) technologies, with a particular interest in the foundational issues of true autonomy and open-ended creative evolution. He is also interested in the historical development of these ideas, and has recently written a book on the (very) early history of the idea of self-reproducing and evolving machines ("The Spectre of Self-Reproducing Machines: An Early History of Evolving Robots", currently under review with publisher). He holds an MA in Natural Sciences from the University of Cambridge (specialising in Experimental Psychology), followed by an MSc (with distinction) and a PhD in Artificial Intelligence from the University of Edinburgh. He has held a wide variety of positions in academia and in tech companies, including work on evolutionary techniques in the games industry (MathEngine PLC, Oxford), postdoctoral research on swarm robotics (University of Edinburgh), and co-founder and CTO of a company developing continuous learning AI systems for fund management (Timberpost). He is an elected board member of the International Society for Artificial Life and an associate examiner for the University of London Worldwide.
Luciano Floridi is Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab, Oxford Internet Institute, University of Oxford. He is also Professorial Fellow of Exeter College, Oxford and Turing Fellow and Chair of the Data Ethics Group of The Alan Turing Institute. The philosophy and ethics of information have been the focus of his research for a long time, and are the subject of his numerous publications, including The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Oxford University Press, 2014), winner of the J. Ong Award.
Michael Anderson, professor emeritus of computer science at the University of Hartford, earned his Ph.D. in computer science and engineering at the University of Connecticut. Susan Leigh Anderson, professor emerita of philosophy at the University of Connecticut, earned her Ph.D. in philosophy at the University of California, Los Angeles. They have been instrumental in establishing machine ethics as a bona fide field of study, co-chairing/authoring the AAAI Fall 2005 Symposium on Machine Ethics, an IEEE Intelligent Systems special issue on machine ethics, and an invited article for Artificial Intelligence Magazine on the topic. Further, their research in machine ethics was selected for Innovative Applications of Artificial Intelligence as an emerging application in 2006. Scientific American (Oct. 2010) featured an invited article on their research, debuting the first robot whose behavior is guided by an ethical principle. They published "Machine Ethics" with Cambridge University Press (2011).
Alan is UWE's Professor of Robot Ethics: an engineer, roboethicist and pro-feminist, interested in robots as working models of life, evolution, intelligence and culture.
Links:
Alan's blog
EPSRC Principles of Robotics
Robotics: A Very Short Introduction
Rob Wortham is currently undertaking a Computer Science PhD at the University of Bath researching autonomous robotics, with a focus on domestic applications and ethical considerations. How does human natural intelligence (NI) interact with AI, and how do we make the behaviour of these systems more understandable? What are the risks and benefits of AI, and how can we maximise the benefit to society, whilst minimising the risks? He is interested in real-world AI for real-world problems.
Previously, he was Founder and CFO of RWA Ltd, a major international company developing IT systems for the leisure travel industry.
Dr Joanna J Bryson is a Reader at the University of Bath and an Affiliate of the Center for Information Technology Policy at Princeton University. Her interests span artificial and natural intelligence; cognition, culture, and society; and AI ethics, safety, and policy.