Consistently Candid


Author: Sarah Hastings-Woodhouse


Description

AI safety, philosophy and other things.
11 Episodes
Katja Grace is the co-founder of AI Impacts, a non-profit focused on answering key questions about the future trajectory of AI development, which is best known for conducting the world's largest survey of machine learning researchers. We talked about the most interesting results from the survey, Katja's views on whether we should slow down AI progress, the best arguments for and against existential risk from AI, parsing the online AI safety debate and more! Follow Katja on Twitter Katja'...
Nathan Labenz is the founder of AI content-generation platform Waymark and host of The Cognitive Revolution Podcast, who now works full-time on tracking and analysing developments in AI. We chatted about where we currently stand with state-of-the-art AI capabilities, whether we should be advocating for a pause on scaling frontier models, Nathan's Red Team in Public project, and some reasons not to be a hardcore doomer! Follow Nathan on Twitter. Listen to The Cognitive Revolution
Sneha Revanur is the founder of Encode Justice, an international, youth-led network campaigning for the responsible development of AI, which was among the sponsors of California's proposed AI bill SB-1047. We chatted about why Sneha founded Encode Justice, the importance of youth advocacy in AI safety, and what the movement can learn from climate activism. We also dug into the details of SB-1047 and answered some common criticisms of the bill! Follow Sneha on Twitter: https://twitter.co...
Nathan Young is a forecaster, software developer and tentative AI optimist. In this episode, we discussed how Nathan approaches forecasting, why his p(doom) is 2-9%, whether we should pause AGI research, and more! Follow Nathan on Twitter: Nathan (@NathanpmYoung) / X (twitter.com) Nathan's substack: Predictive Text | Nathan Young | Substack My Twitter: sarah (@littIeramblings) / X (twitter.com)
A while back, my self-confessed inability to fully comprehend the writings of Eliezer Yudkowsky elicited the sympathy of the author himself. In an attempt to more completely understand why AI is going to kill us all, I enlisted the help of Noah Topper, recent Computer Science Master's graduate and long-time EY fan, to help me break down A List of Lethalities (which, for anyone unfamiliar, is a fun list of 43 reasons why we're all totally screwed). Follow Noah on Twitter: Noah Topper (@Noah...
Holly Elmore is an AI pause advocate and Executive Director of PauseAI US. We chatted about the case for pausing AI, her experience of organising protests against frontier AGI research, the danger of relying on warning shots, the prospect of techno-utopia, possible risks of pausing and more! Follow Holly on Twitter: Holly Elmore (@ilex_ulmus) / X (twitter.com) Official PauseAI US Twitter account: PauseAI US (@pauseaius) / X (twitter.com) My Twitter: sarah (@littIeramblings) / X (twitter...
In this episode, I talked with Joep Meindertsma, founder of PauseAI, about how he discovered AI safety, the emotional experience of internalising existential risks, strategies for communicating AI risk, his assessment of recent AI policy developments and more! Find out more about PauseAI at www.pauseai.info
Émile P. Torres is a philosopher and historian known for their research on the history and ethical implications of human extinction. They are also an outspoken critic of Effective Altruism, longtermism and the AI safety movement. In this episode, we chatted about why Émile opposes both the 'doomer' and accelerationist factions, and identified some of our agreements and disagreements about AI safety.
Darren McKee is an author, speaker and policy advisor who has recently penned a beginner-friendly introduction to AI safety titled Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. We chatted about the best arguments for worrying about AI, responses to common objections, how to navigate the online AI safety space as a non-expert, and more. Buy Darren's book on Amazon: https://www.amazon.co.uk/Uncontrollable-Threat-Artificial-Superintelligence-World-eboo...
Akash is an AI policy researcher working on ways to reduce global security risks from advanced AI. He has worked at the Center for AI Safety, the Center for AI Policy, and Control AI. Before getting involved in AI safety, he was a PhD student studying technology & mental health at the University of Pennsylvania. We chatted about why he decided to work on AI safety, the current state of AI policy, advice for people looking to get involved in the field and much more! Follow Akash on Twitter: htt...
In this inaugural episode of Consistently Candid, Aaron Bergman and Max Alexander each try to convince me of their position on moral realism, and I settle the issue once and for all. Featuring occasional interjections from the sat-nav in the Uber Aaron was taking at the time. My Twitter: https://twitter.com/littIeramblings Max's Twitter: https://twitter.com/absurdlymax Aaron's Twitter: https://twitter.com/AaronBergman18