
Cloud Security Podcast by Google

Author: Anton Chuvakin


Description

Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure.

We’re going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject’s benefit or just for organizational benefit.

We hope you’ll join us if you’re interested in where technology overlaps with process and bumps up against organizational design. We’re hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can’t keep as the world moves from on-premises computing to cloud computing.
182 Episodes
Guest: Zack Allen, Senior Director of Detection & Research @ Datadog, creator of Detection Engineering Weekly Topics: What are the biggest challenges facing detection engineers today? What do you tell people who want to consume detections rather than engineer them? What advice would you give to someone who is interested in becoming a detection engineer at their organization? So, what IS a detection engineer? Do you need software skills to be one? How much breadth and depth do you need? What should a SOC leader whose team totally lacks such skills do? You created Detection Engineering Weekly. What motivated you to start this publication, and what are your goals for it? What are the learnings so far? You work for a vendor, so how should customers think of vendor-made vs customer-made detections and their balance? What goes into a backlog for detections and how do you inform it? Resources: Video (LinkedIn, YouTube) Zack’s newsletter: https://detectionengineering.net EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil EP117 Can a Small Team Adopt an Engineering-Centric Approach to Cybersecurity? The SRE book “Detection Spectrum” blog “Delivering Security at Scale: From Artisanal to Industrial” blog (and this too) “Detection Engineering is Painful — and It Shouldn’t Be (Part 1)” blog series “Detection as Code? No, Detection as COOKING!” blog “Practical Threat Detection Engineering: A hands-on guide to planning, developing, and validating detection capabilities” book SpecterOps blog
Guests: Mitchell Rudoll, Specialist Master, Deloitte Alex Glowacki, Senior Consultant, Deloitte Topics: The paper outlines two paths for SOCs: optimization or transformation. Can you elaborate on the key differences between these two approaches and the factors that should influence an organization's decision on which path to pursue?  The paper also mentions that alert overload is still a major challenge for SOCs. What are some of the practices that work in 2024 for reducing alert fatigue and improving the signal-to-noise ratio in security signals? You also discuss the importance of automation for SOCs. What are some of the key areas where automation can be most beneficial, and what are some of the challenges of implementing automation in SOCs? Automation is often easier said than done… What specific skills and knowledge will be most important for SOC analysts in the future that people didn’t think of 5-10 years ago? Looking ahead, what are your predictions for the future of SOCs? What emerging technologies do you see having the biggest impact on how SOCs operate?  Resources: “Future of the SOC: Evolution or Optimization —Choose Your Path” paper and highlights blog “Meet the Ghost of SecOps Future” video based on the paper EP58 SOC is Not Dead: How to Grow and Develop Your SOC for Cloud and Beyond The original Autonomic Security Operations (ASO) paper (2021) “New Paper: “Future of the SOC: Forces shaping modern security operations” (Paper 1 of 4)” “New Paper: “Future of the SOC: SOC People — Skills, Not Tiers” (Paper 2 of 4)” “New Paper: “Future Of The SOC: Process Consistency and Creativity: a Delicate Balance” (Paper 3 of 4)”
Guests: Robin Shostack, Security Program Manager, Google Jibran Ilyas, Managing Director Incident Response, Mandiant, Google Cloud Topics: You talk about “teamwork under adverse conditions” to describe expedition behavior (EB). Could you tell us what it means? You have been involved in response to many high-profile incidents; one of the ones we can talk about publicly is one of the biggest healthcare breaches to date. Could you share how expedition behavior played a role in our response? Apart from incident response, which is almost definitionally an adverse condition, how else can security teams apply this knowledge? If teams are going to embrace an expedition behavior mindset, how do they learn it? It’s probably not feasible to ship every SOC team member off to the Okavango Delta for a NOLS course. Short of that, how do we foster EB in a new team? How do we create it in an existing team or an under-performing team? Resources: EP174 How to Measure and Improve Your Cloud Incident Response Readiness: A New Framework EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster? “Take a few of these: Cybersecurity lessons for 21st century healthcare professionals” blog Getting More by Stuart Diamond book Who Moved My Cheese by Spencer Johnson book
Guest: Brandon Wood, Product Manager for Google Threat Intelligence Topics: Threat intelligence is one of those terms that means different things to everyone–can you tell us what this term has meant in the different contexts of your career? What do you tell people who assume that “TI = lists of bad IPs”? We heard while prepping for this show that you were involved in breaking up a human trafficking ring: tell us about that! In Anton’s experience, a lot of cyber TI is stuck in “1. Get more TI 2. ??? 3. Profit!” How do you move past that? One aspect of threat intelligence that’s always struck me as goofy is the idea that we can “monitor the dark web” and provide something useful. Can you change my mind on this one? You told us your story of getting into sales; you recently did a successful rotation into the role of Product Manager. Can you tell us what motivated you to do this and what the experience was like? Are there other parts of your background that inform the work you’re doing and how you see yourself at Google? How does that impact our go-to-market for threat intelligence, and what are we up to when it comes to keeping the Internet and broader world safe? Resources: Video EP175 Meet Crystal Lister: From Public Sector to Google Cloud Security and Threat Horizons EP128 Building Enterprise Threat Intelligence: The Who, What, Where, and Why EP112 Threat Horizons - How Google Does Threat Intelligence Introducing Google Threat Intelligence: Actionable threat intelligence at Google scale A Requirements-Driven Approach to Cyber Threat Intelligence
Guests: Omar ElAhdan, Principal Consultant, Mandiant, Google Cloud Will Silverstone, Senior Consultant, Mandiant, Google Cloud Topics: Most organizations you see use both cloud and on-premise environments. What are the most common challenges organizations face in securing their hybrid cloud environments? You do IR, so in your experience, what are the top 5 mistakes organizations make that lead to cloud incidents? How and why do organizations get the attack surface wrong? Are there pillars of attack surface? We talk a lot about how IAM matters in the cloud. Is it true that AD is what gets you in many cases, even for other clouds? What is your best cloud incident preparedness advice for organizations that are new to cloud and still use on-prem as well? Resources: Next 2024 LIVE Video of this episode / LinkedIn version (sorry for the audio quality!) “Lessons Learned from Cloud Compromise” podcast at The Defender’s Advantage “Cloud compromises: Lessons learned from Mandiant investigations” in 2023 from Next 2024 EP174 How to Measure and Improve Your Cloud Incident Response Readiness: A New Framework EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler
Guest: Seth Vargo, Principal Software Engineer responsible for Google's use of the public cloud, Google Topics: Google uses the public cloud, no way, right? Which one? Oh, yeah, I guess this is obvious: GCP, right? Where are we like other clients of GCP?  Where are we not like other cloud users? Do we have any unique cloud security technology that we use that others may benefit from? How does our cloud usage inform our cloud security products? So is our cloud use profile similar to cloud natives or traditional companies? What are some of the most interesting cloud security practices and controls that we use that are usable by others? How do we make them work at scale?  Resources: EP12 Threat Models and Cloud Security (previous episode with Seth) EP66 Is This Binary Legit? How Google Uses Binary Authorization and Code Provenance EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics IAM Deny Seth Vargo blog “Attention Is All You Need” paper (yes, that one)
Guest: Crystal Lister, Technical Program Manager, Google Cloud Security Topics: Your background can be sheepishly called “public sector”, what’s your experience been transitioning from public to private? How did you end up here doing what you are doing? We imagine you learned a lot from what you just described – how’s that impacted your work at Google? How have you seen risk management practices and outcomes differ? You now lead Google Threat Horizons reports, do you have a vision for this? How does your past work inform it? Given the prevalence of ransomware attacks, many organizations are focused on external threats. In your experience, does the risk of insider threats still hold significant weight? What type of company needs a dedicated and separate insider threat program? Resources: Video on YouTube Google Cybersecurity Action Team Threat Horizons Report #9 Is Out! Google Cybersecurity Action Team site for previous Threat Horizons Reports EP112 Threat Horizons - How Google Does Threat Intelligence Psychology of Intelligence Analysis by Richards J. Heuer The Coming Wave by Mustafa Suleyman  Visualizing Google Cloud: 101 Illustrated References for Cloud Engineers and Architects  
Guest: Angelika Rohrer, Sr. Technical Program Manager, Cyber Security Response at Alphabet Topics: Incident response (IR) is by definition “reactive”, but ultimately incident prep determines your IR success. What are the broad areas where one needs to prepare? You have created a new framework for measuring how ready you are for an incident; what is the approach you took to create it? Can you elaborate on the core principles behind the Continuous Improvement (CI) Framework for incident response? Why is continuous improvement crucial for effective incident response, especially in cloud environments? Can’t you just make a playbook and use it? How do you overcome the desire to focus on the easy metrics and go to more valuable ones? What do you think Google does best in this area? Can you share examples of how the CI Framework could have helped prevent or mitigate a real-world cloud security incident? How can other organizations practically implement the CI Framework to enhance their incident response capabilities after they read the paper? Resources: “How do you know you are ‘Ready to Respond’?” paper EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster?
Guest: Shan Rao, Group Product Manager, Google Topics: What are the unique challenges when securing AI for cloud environments, compared to traditional IT systems? Your talk covers 5 risks, why did you pick these five? What are the five, and are these the worst? Some of the mitigations seem the same for all risks. What are the popular SAIF mitigations that cover more of the risks? Can we move quickly and securely with AI? How? What future trends and developments do you foresee in the field of securing AI for cloud environments, and how can organizations prepare for them? Do you think in 2-3 years AI security will be a separate domain or a part of … application security? Data security? Cloud security? Resources: Video (LinkedIn, YouTube) [live audio is not great in these] “A cybersecurity expert's guide to securing AI products with Google SAIF” presentation SAIF Site “To securely build AI on Google Cloud, follow these best practices” (paper) “Secure AI Framework (SAIF): A Conceptual Framework for Secure AI Systems” resources Corey Quinn on X (long story why this is here… listen to the episode)
Guests: None Topics: What have we seen at RSA 2024? Which buzzwords are rising (AI! AI! AI!) and which ones are falling (hi XDR)? Is this really all about AI? Is this all marketing? Security platforms or focused tools, who is winning at RSA? Anything fun going on with SecOps? Is cloud security still largely about CSPM? Any interesting presentations spotted? Resources: EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (RSA 2024 episode 1 of 2) “From Assistant to Analyst: The Power of Gemini 1.5 Pro for Malware Analysis” blog “Decoupled SIEM: Brilliant or Stupid?” blog “Introducing Google Security Operations: Intel-driven, AI-powered SecOps” blog “Advancing the art of AI-driven security with Google Cloud” blog
Guest: Elie Bursztein, Google DeepMind Cybersecurity Research Lead, Google  Topics: Given your experience, how afraid or nervous are you about the use of GenAI by the criminals (PoisonGPT, WormGPT and such)? What can a top-tier state-sponsored threat actor do better with LLM? Are there “extra scary” examples, real or hypothetical? Do we really have to care about this “dangerous capabilities” stuff (CBRN)? Really really? Why do you think that AI favors the defenders? Is this a long term or a short term view? What about vulnerability discovery? Some people are freaking out that LLM will discover new zero days, is this a real risk?  Resources: “How Large Language Models Are Reshaping the Cybersecurity Landscape” RSA 2024 presentation by Elie (May 6 at 9:40AM) “Lessons Learned from Developing Secure AI Workflows” RSA 2024 presentation by Elie (May 8, 2:25PM) EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents EP40 2021: Phishing is Solved? EP135 AI and Security: The Good, the Bad, and the Magical EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It PyRIT LLM red-teaming tool Accelerating incident response using generative AI Threat Actors are Interested in Generative AI, but Use Remains Limited OpenAI’s Approach to Frontier Risk  
Guest: Payal Chakravarty, Director of Product Management, Google SecOps, Google Cloud Topics: What are the different use cases for GenAI in security operations and how can organizations prioritize them for maximum impact to their organization? We’ve heard a lot of worries from people that GenAI will replace junior team members–how do you see GenAI enabling more people to be part of the security mission? What are the challenges and risks associated with using GenAI in security operations? We’ve been down the road of automation for SOCs before–UEBA and SOAR both claimed it–and AI looks a lot like those but with way more matrix math. What are we going to get right this time that we didn’t quite live up to last time(s) around? Imagine a SOC or a D&R team of 2029. What AI-based magic is routine at this time? What new things are done by AI? What do humans do? Resources: Live video (LinkedIn, YouTube) [live audio is not great in these] Practical use cases for AI in security operations, Cloud Next 2024 session by Payal EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps 15 must-attend security sessions at Next '24
Guests: no guests (just us!) Topics: What are some of the fun security-related launches from Next 2024 (sorry for our brief “marketing hat” moment!)? Any fun security vendors we spotted “in the clouds”? OK, what are our favorite sessions? Our own, right? Anything else we had time to go to? What are the new security ideas inspired by the event (you really want to listen to this part! Because “freatures”...) Any tricky questions at the end? Resources: Live video (LinkedIn, YouTube) [live audio is not great in these] 15 must-attend security sessions at Next '24 Cloud CISO Perspectives: 20 major security announcements from Next ‘24 EP137 Next 2023 Special: Conference Recap - AI, Cloud, Security, Magical Hallway Conversations (last year!) EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP90 Next Special - Google Cybersecurity Action Team: One Year Later! A cybersecurity expert's guide to securing AI products with Google SAIF Next 2024 session How AI can transform your approach to security Next 2024 session
Guests: Umesh Shankar, Distinguished Engineer, Chief Technologist for Google Cloud Security Scott Coull, Head of Data Science Research, Google Cloud Security Topics: What does it mean to “teach AI security”? How did we make SecLM? And also: why did we make SecLM? What can a “security trained LLM” do better vs a regular LLM? Does making it better at security make it worse at other things that we care about? What can a security team do with it today? What are the “starter use cases” for SecLM? What has been the feedback so far in terms of impact - both from practitioners but also from team leaders? Are we seeing the limits of LLMs for our use cases? Is the “LLM is not magic” realization finally dawning? Resources: “How to tackle security tasks and workflows with generative AI” (Google Cloud Next 2024 session on SecLM) EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models Supercharging security with generative AI Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma? Considerations for Evaluating Large Language Models for Cybersecurity Tasks Introducing Google’s Secure AI Framework Deep Learning Security and Privacy Workshop Security Architectures for Generative AI Systems ACM Workshop on Artificial Intelligence and Security Conference on Applied Machine Learning in Information Security
Speakers: Maria Riaz, Cloud Counter-Abuse, Engineering Lead, Google Cloud Topics: What is “counter abuse”? Is this the same as security? What does counter-abuse look like for GCP? What are the popular abuse types we face? Do people use stolen cards to get accounts and then violate the terms with them? How do we deal with this, generally? Beyond core technical skills, what are some of the relevant competencies for working in this space that would appeal to a diverse audience? You have worked in academia and industry. What similarities or differences have you observed? Resources / reading: Video EP165 Your Cloud Is Not a Pet - Decoding 'Shifting Left' for Cloud Security EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud “Art of War” by Sun Tzu “Dare to Lead” by Brene Brown "Multipliers" by Liz Wiseman
Guests: Evan Gilman, co-founder and CEO of SPIRL Eli Nesterov, co-founder and CTO of SPIRL Topics: Today we have IAM, zero trust and security made easy. With that intro, could you give us the 30-second version of what a workload identity is and why people need them? What’s so spiffy about SPIFFE anyway? What’s different between this and micro-segmentation of your network–why is one better or worse? You call your book “solving the bottom turtle”–could you tell us what that means? What are the challenges you’re seeing large organizations run into when adopting this approach at scale? Of all the things a CISO could prioritize, why should this one get added to the list? What makes this, which is so core to our internal security model, ripe for the outside world? How do people do it now, and what gets thrown away when you deploy SPIFFE? Are there alternatives? SPIFFE is interesting, yet can a startup really “solve for the bottom turtle”? Resources: SPIFFE and SPIRL “Solving the Bottom Turtle” book [PDF, free] “Surely You're Joking, Mr. Feynman!” book [also, one of Anton’s faves for years!] “Zero Trust Networks” book Workload Identity Federation in GCP
Guest: Ahmad Robinson, Cloud Security Architect, Google Cloud Topics: You’ve done a BlackHat webinar where you discuss a Pets vs Cattle mentality when it comes to cloud operations. Can you explain this mentality and how it applies to security? What in your past led you to these insights? Tell us more about your background and your journey to Google. How did that background contribute to your team? One term that often comes up on the show and with our customers is 'shifting left.' Could you explain what 'shifting left' means in the context of cloud security? What’s hard about shift left, and where do orgs get stuck too far right? A lot of “cloud people” talk about IaC and PaC but the terms and the concepts are occasionally confusing to those new to cloud. Can you briefly explain Policy as Code and its security implications? Does PaC help or hurt security? Resources: “No Pets Allowed - Mastering The Basics Of Cloud Infrastructure” webinar EP33 Cloud Migrations: Security Perspectives from The Field EP126 What is Policy as Code and How Can It Help You Secure Your Cloud Environment? EP138 Terraform for Security Teams: How to Use IaC to Secure the Cloud
Guest: Jennifer Fernick, Senior Staff Security Engineer and UTL, Google Topics: Since one of us (!) doesn't have a PhD in quantum mechanics, could you explain what a quantum computer is and how we know they are on a credible path towards being real threats to cryptography? How soon do we need to worry about this one? We’ve heard that quantum computers are more of a threat to asymmetric/public key crypto than symmetric crypto. First off, why? And second, what does this difference mean for defenders? Why (and how) are we sure this is coming? Are we mitigating a threat that is perennially 10 years ahead and then vanishes due to some other broad technology change? What is a post-quantum algorithm anyway? If we’re baking new key exchange crypto into our systems, how confident are we that we are going to be resistant to both quantum and traditional cryptanalysis? Why does NIST think it's time to be doing the PQC thing now? Where is the rest of the industry on this evolution? How can a person tell the difference here between reality and snake oil? I think Anton and I both responded to your initial email with a heavy dose of skepticism, and probably more skepticism than it deserved, so you get the rare on-air apology from both of us! Resources: Securing tomorrow today: Why Google now protects its internal communications from quantum threats How Google is preparing for a post-quantum world NIST PQC standards PQ Crypto conferences “Quantum Computation & Quantum Information” by Nielsen & Chuang book “Quantum Computing Since Democritus” by Scott Aaronson book EP154 Mike Schiffman: from Blueboxing to LLMs via Network Security at Google
Guest: Phil Venables, Vice President, Chief Information Security Officer (CISO) @ Google Cloud Topics: You had this epic 8 megatrends idea in 2021, where are we now with them? We now have 9 of them, what made you add this particular one (AI)? A lot of CISOs fear runaway AI. Hence good governance is key! What is your secret of success for AI governance? What questions are CISOs asking you about AI? What questions about AI should they be asking that they are not asking? Which one of the megatrends is the most contentious based on your presenting them worldwide? Is cloud really making the world of IT simpler (megatrend #6)? Do most enterprise cloud users appreciate the software-defined nature of cloud (megatrend #5) or do they continue to fight it? Which megatrend is manifesting the most strongly in your experience? Resources: Megatrends drive cloud adoption—and improve security for all and infographic “Keynote | The Latest Cloud Security Megatrend: AI for Security” “Lessons from the future: Why shared fate shows us a better cloud roadmap” blog and shared fate page SAIF page “Spotlighting ‘shadow AI’: How to protect against risky AI practices” blog EP135 AI and Security: The Good, the Bad, and the Magical EP47 Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security Secure by Design by CISA
Guest: Kat Traxler, Security Researcher, TrustOnCloud Topics: What is your reaction to “in the cloud you are one IAM mistake away from a breach”? Do you like it or do you hate it? A lot of people say “in the cloud, you must do IAM ‘right’”. What do you think that means? What is the first or the main idea that comes to your mind when you hear it? How have you seen the CSPs take different approaches to IAM? What does it mean for cloud users? Why do people still screw up IAM in the cloud so badly after years of trying? Deeper, why do people still screw up resource hierarchy and resource management? Are the identity sins of cloud IAM users truly the sins of the creators? How did the "big 3" get it wrong and how does that continue to manifest today? Your best cloud IAM advice is “assign roles at the lowest resource level possible”–please explain this one? Where is the magic? Resources: Video (LinkedIn, YouTube) Kat blog “Diving Deeply into IAM Policy Evaluation” blog “Complexity: a Guided Tour” book EP141 Cloud Security Coast to Coast: From 2015 to 2023, What's Changed and What's the Same? EP129 How CISO Cloud Dreams and Realities Collide