40. AI alignment with Rohin Shah

This month we're talking to Rohin Shah about alignment problems in AI, constraining AI behaviour, current AI vs future AI, recommendation algorithms and extremism, appropriate uses of AI, the fuzziness of fairness, and Rohin’s love of coordination problems.
Date: 11th of March 2020
Podcast authors: Ben Gilburt with Rohin Shah
Audio duration: 37:46 | Website plays & downloads: 367
Tags: Academic, Alignment, Berkeley, Newsletter | Playlists: Existential risk

Rohin is a 6th-year PhD student in Computer Science working at the Center for Human-Compatible AI (CHAI) at UC Berkeley. His interests in CS are broad, spanning AI, machine learning, programming languages, complexity theory, algorithms, and security, and he started his PhD working on program synthesis. However, he became convinced that building safe, aligned AI is really important, and so moved to CHAI at the start of his 4th year. He now thinks about how to provide specifications of good behaviour in ways other than reward functions, especially ones that do not require much human effort. He is best known for the Alignment Newsletter, a weekly publication covering recent content relevant to AI alignment, which has over 1,600 subscribers.

No transcript currently available for this episode.

Episode host: Ben Gilburt

Ben Gilburt writes and talks about the future of technology, particularly machine ethics.

@RealBenGilburt