The Privacy Advisor Podcast

Author: Privacy Professionals


Description

The International Association of Privacy Professionals is the largest and most comprehensive global information privacy community and resource, helping practitioners develop and advance their careers and organizations manage and protect their data. More than just a professional association, the IAPP provides a home for privacy professionals around the world to gather, share experiences and enrich their knowledge.

Founded in 2000, the IAPP is a not-for-profit association with more than 70,000 members in 100 countries. The IAPP helps define, support and improve the privacy profession through networking, education and certification.

This podcast features IAPP Editorial Director Jedidiah Bracy, who interviews privacy pros and thought leaders from around the world about technology, law, policy and the privacy profession.
190 Episodes
On 4 Sept., the Court of Justice of the European Union gave its highly anticipated decision in the EDPS v. SRB case. In its landmark ruling, the CJEU clarified the definition of personal data under the EU General Data Protection Regulation, and, in essence, the scope of EU data protection law. For Ulrich Baumgartner, a partner at Baumgartner Baumann and IAPP Country Leader for the DACH region, the ruling demonstrates a continued "relative approach" by the court, but it also provides a significant clarification against what he believes has been an "absolutist" approach by the European Data Protection Supervisor and other EU data protection authorities. Though the ruling provides important clarity for personal data, pseudonymity and anonymity, it also raises other questions. Either way, there are concrete takeaways for data protection professionals. IAPP Editorial Director Jedidiah Bracy recently caught up with Baumgartner to discuss the implications of the ruling, including what it can mean for the Data Act, data processing agreements and more. 
Ruby Zefo has long been a leader in the fields of privacy, data protection and cybersecurity. She was the first chief privacy officer at Uber, where she served beginning in 2018, helping lead the company’s efforts to protect and enable user data. She did so while Uber continued to innovate its technology amid a dramatic increase in digital laws around the world. Earlier this year, Zefo announced her retirement from Uber and her next move as a fellow at Stanford University’s Distinguished Careers Institute. IAPP Editorial Director Jedidiah Bracy caught up with Zefo to discuss her work building a privacy team at Uber and how she has navigated—and led—in an increasingly complex and challenging world.
Nearly a year ago, the IAPP expanded its mission in response to a rapidly changing digital environment to include AI governance, digital responsibility and cybersecurity law. The mission expansion took place a year after the IAPP hired Ashley Casovan to lead its first-ever AI Governance Center. Since then, Casovan has led the development of the center, which includes work helping to inform AI governance training and certification, a forthcoming AI governance textbook, and the AI Governance Global conferences. Casovan came to the IAPP after leading the Responsible AI Institute as its executive director and previously worked for the Canadian government as director of data architecture and innovation. She’s currently drafting a skills competency framework for AI governance. Based in Montreal, Casovan trekked south to spend time at IAPP headquarters in Portsmouth, NH. While here, she and IAPP Editorial Director Jedidiah Bracy discussed the makings of an AI governance professional. What skills are required, and what is she seeing in this evolving profession? Here’s what she had to say.
As the privacy profession surpasses the quarter-century mark and enters a brave new world of artificial intelligence and digital entropy, it’s worth taking a look back to assess how far the profession has come. That’s exactly what long-time privacy pro Stephen Bolinger embarked upon when he decided to film a documentary on the rise of the privacy profession. "Privacy People" explores the many interpretations of the privacy concept through the voices of some of the profession’s most seasoned and respected leaders. Bolinger said he felt there was a compelling story to tell. By juxtaposing on-the-street interviews with sit-down discussions with some of privacy’s luminaries in government, industry, civil society and academia, "Privacy People" looks at how this dynamic profession has grown and changed over the years, as well as recognizing the prominent role women have played throughout its evolution. Earlier this year, IAPP Editorial Director Jedidiah Bracy sat down with Bolinger to discuss the impetus for his documentary and how he went about filming "Privacy People."
As chief privacy officer of the biggest city in the United States, it’s safe to say that Michael Fitzpatrick doesn’t have your normal, run-of-the-mill job. As part of New York’s Office of Technology and Innovation, the Office of Information Privacy provides guidance to more than 175 agency privacy officers across the city. It also works closely with the city’s Cyber Command and has partnered with the Cities Coalition for Digital Rights and the Biometrics Institute. IAPP Editorial Director Jedidiah Bracy caught up with Fitzpatrick to learn more about his work as CPO of New York City, how his office works across government and what he sees as some of the biggest challenges in privacy and cybersecurity.
Autonomous robots with embedded artificial intelligence are growing more common across industry sectors. So-called "embodied AI" collects vast amounts of data through its sensors and changes how humans interact with technology. As embodied AI becomes more common and continues to drive innovation, it also creates new challenges for ethical uses of data and personal privacy. Erin Relford is a privacy engineer at Google and has worked in the embodied AI space. In a recent article for the IAPP, she wrote that "existing privacy mitigations may be insufficient for human-robot interactions." That’s why she helped create a robotics privacy framework to "promote privacy-preserving design" in the "responsible deployment of robotics with embedded AI." IAPP Editorial Director Jedidiah Bracy caught up with Relford to discuss her work in this vanguard space.
Privacy law and technological advancement have a deep and intertwined history that goes back to at least the 1890s, when Samuel Warren and Louis Brandeis's article "The Right to Privacy" was prompted by camera technology. George Washington University Law Professor Dan Solove has long studied and written about privacy law. He has published several well-known books, including "Nothing to Hide: The False Tradeoff Between Privacy and Security," and co-authored "Privacy Law Fundamentals," which is published by the IAPP. Solove recently published a new book, "On Privacy and Technology." IAPP Editorial Director Jedidiah Bracy caught up with Solove just before the book was published to discuss it, whether the regulation-versus-innovation trade-off is a fallacy, why the notice-and-choice paradigm hasn't worked for consumers, and where the future will take privacy, AI, and cybersecurity law and regulation.
Australia made waves in 2024 after it passed an amendment to the Online Safety Act of 2021, which introduces a legal minimum age of 16 to create and use an account for certain social media platforms in Australia. It also requires platforms within scope to implement age-gating practices. As Australia’s first eSafety Commissioner, Julie Inman-Grant, whose agency administers the Online Safety Act and the Social Media Minimum Age amendment, has been at the forefront of regulating online safety since her appointment in 2017. With a background in the private sector, including stints at Microsoft, Twitter and Adobe, Inman-Grant has a wide-ranging view of the online space and the harms within it. IAPP Editorial Director Jedidiah Bracy recently caught up with Commissioner Inman-Grant to discuss her work in online safety, what’s currently underway regarding age-gating requirements for social media and the effects AI will have for online safety and harms.
Though it came close in recent years, federal privacy legislation is not likely top of mind as a new administration takes the reins in Washington, DC. The same likely goes for federal AI governance and safety legislation, with a divided Congress and an executive branch that promotes a deregulatory posture. That means state-level privacy and AI bills will proliferate in 2025. Connecticut was the fifth U.S. state to pass a comprehensive privacy law, and Connecticut State Senator James Maroney played a large role in crafting his state's bill. Maroney is now working on AI legislation and takes part in the Future of Privacy Forum’s Multistate AI Policymaker Working Group, which comprises more than 200 bipartisan state lawmakers and other government officials, with the aim to "foster a shared understanding of emerging technologies and related policy issues." IAPP Editorial Director Jedidiah Bracy recently caught up with Maroney to discuss his work on privacy, his experience working with other policymakers in the multistate working group, and what to expect from AI legislation in Connecticut this coming year.
It's hard to believe we’ve reached the final weeks of 2024, a year filled with policy and legal developments across the map. From the continued emergence of AI governance, to location privacy enforcement, children’s online safety to novel forms of privacy litigation, no doubt this was a year that kept privacy and AI governance pros very busy. One such professional in the space is Goodwin Partner Omer Tene. He’s been immersed in many of these thorny issues, and as always, has thoughts about what’s transpired in 2024 and what that means for the year ahead. I caught up with Tene to discuss the year in digital policy. Here's what he had to say.
AI governance is a rapidly evolving field that faces a wide array of risks, challenges and opportunities. For organizations looking to leverage AI systems such as large language models and generative AI, assessing risk prior to deployment is a must. One technique that’s been borrowed from the security space is red teaming. The practice is growing, and regulators are taking notice. Brenda Leong, a partner at Luminos Law, helps global businesses manage their AI and data risks. I recently caught up with her to discuss what organizations should be thinking about when diving into red teaming to assess risk prior to deployment.
As the U.S. enters the final stretch of the 2024 election cycle, we face a tight race at the presidential and congressional levels. With a razor-thin margin separating Vice President Kamala Harris and former president Donald Trump, we decided to take a look at the possible policy positions of each campaign with regard to privacy and artificial intelligence governance. Of course, reading tea leaves is no easy feat, but while attending IAPP Privacy. Security. Risk. 2024 in Los Angeles, California, IAPP Editorial Director Jedidiah Bracy sat down with Managing Director, D.C., Cobun Zweifel-Keegan, CIPP/US, CIPM, to gain his insight on each camp's policy positions, from the administrative state to international data transfers and beyond. Here's what he had to say.  
The year 2024 proved to be another robust one for emerging U.S. state privacy law. Seven states joined the ranks, bringing the total up to 19. Unlike previous years, however, 2024 underwent a paradigm shift away from the standard framework influenced by the draft Washington State Privacy Act. For the Future of Privacy Forum's Keir Lamont, CIPP/US, and Husch Blackwell's David Stauss, CIPP/E, CIPP/US, CIPT, FIP, PLS, 2024 marked the end of what Lamont calls the "Pax Washingtonia" era for state privacy law. While attending the IAPP Privacy. Security. Risk. conference in Los Angeles, California, IAPP Editorial Director Jedidiah Bracy caught up with Lamont and Stauss to discuss this busy year in state privacy law, as well as what to expect with rulemaking and enforcement at the state level.
In May 2024, the U.S. National Institute of Standards and Technology launched a new program called ARIA, short for Assessing Risks and Impacts of AI. The aim of the program is to advance sociotechnical testing and evaluation of artificial intelligence by developing methods to quantify how a given system works within real-world contexts. Potential outputs include scalable guidelines, tools, methodologies and metrics. Reva Schwartz is a research scientist and principal investigator for AI bias at NIST and the ARIA program lead. In recent years, she has also helped with NIST's AI Risk Management Framework. IAPP Editorial Director Jedidiah Bracy recently caught up with Schwartz to discuss the program, what it entails, how it will work and who will be involved.
With the proliferation of comprehensive U.S. state privacy laws in recent years, there’s been an understandable focus by privacy professionals on this growing patchwork. But privacy litigation is also on the rise, and the plaintiff’s bar has explored some novel theories, particularly around the use of online tracking technologies. Greenberg Traurig Shareholder Darren Abernethy advises clients in the ad tech, data privacy and cybersecurity space and is familiar with these recent litigation trends involving theories related to pen registers, chatbots, session replay, Meta pixels, software development kits and the Video Privacy Protection Act. Here’s what he had to say about these growing litigation trends.
For many of us following along with the EU AI Act negotiations, the road to a final agreement took many twists and turns, some unexpected. For Laura Caroli, this long, complicated road has been a lived experience. As the lead technical negotiator and policy advisor to AI Act co-rapporteur Brando Benifei, Caroli was immersed in high-stakes negotiations for the world’s first major AI legislation. IAPP Editorial Director Jedidiah Bracy spoke with Caroli in a candid conversation about her experience and policy philosophy, including the approach EU policymakers took in crafting the AI Act, the obstacles negotiators faced, and how it fundamentally differs from the EU General Data Protection Regulation. She addresses criticisms of the act, highlights the AI-specific rights for individuals, discusses the approach to future-proofing a law that regulates such a rapidly developing technology, and looks ahead to what a successful AI law will look like in practice.
In tandem with privacy, cybersecurity law is rapidly evolving to meet the needs of an increasingly digitized and complex economy. To help practitioners keep up with this ever-changing space, the IAPP published the first edition of Cybersecurity Law Fundamentals in 2021. But there have been a lot of developments since then. Cybersecurity Law Fundamentals author Jim Dempsey, lecturer at UC Berkeley Law School and senior policy advisor at Stanford Cyber Policy Center, brought on a co-author, John Carlin, partner at Paul Weiss and former Assistant Attorney General, to help with the new edition. IAPP Editorial Director Jedidiah Bracy recently spoke with both Dempsey and Carlin about the latest trends in cybersecurity, including best practices in dealing with ransomware, the significance of the new SEC disclosure rule, cybersecurity provisions in state privacy laws, trends in FTC enforcement, the recent Biden Executive Order on preventing access to bulk sensitive personal data to countries of concern, and much more. We even hear about the time Carlin briefed the U.S. president on the Sony Pictures hack.
For those following the regulation of artificial intelligence, passage of the AI Act in the EU is no doubt top of mind. But proposed policies, laws and regulatory developments are taking shape in many corners of the world, including in Australia, Brazil, Canada, China, India, Singapore and the U.S. Not to be left behind, the U.K. held a highly touted AI Safety Summit late last year, producing the Bletchley Declaration, and the government has been quite active in what the IAPP Research and Insights team describes as a "context-based, proportionate approach to regulation." In the upper chamber of the U.K. Parliament, Lord Holmes, a member of the influential House of Lords Select Committee on Science and Technology, introduced a private members’ bill late in 2023 that proposes the regulation of AI. The bill received its second reading in the House of Lords 22 March. Lord Holmes spoke of AI’s power at a recent IAPP conference in London. While there, I had the opportunity to catch up with him to learn more about his Artificial Intelligence (Regulation) Bill and what he sees as the right approach to guiding the powers of this burgeoning technology.
Hard to believe we’re at the twilight of 2023. For those following data protection and privacy developments, each year seems to bring with it a torrent of news and developments. This past year was no different. The EU General Data Protection Regulation turned five, and the Snowden revelations turned 10. From a finalized EU-U.S. Data Privacy Framework, to major enforcement actions against Big Tech companies, to a panoply of new data protection laws in India and at least seven U.S. states, to the dramatic rise of AI governance, 2023 was as robust as ever. To help flesh out some of the big takeaways from 2023, IAPP Editorial Director Jedidiah Bracy caught up with IAPP Research and Insights Director Joe Jones, who joined the IAPP at the outset of the year.
After a grueling trilogue process that featured two marathon negotiating sessions, the European Union finally came to a political agreement 8 December on what will be the world’s first comprehensive regulation of artificial intelligence. The EU AI Act will be a risk-based, horizontal regulation with far-reaching provisions for companies and organizations using, designing or deploying AI systems. Though the so-called trilogue process is a fairly opaque one, where the European Parliament, European Commission and Council of the EU negotiate behind closed doors, journalist Luca Bertuzzi has acted as a window into the process through his persistent reporting for Euractiv. IAPP Editorial Director Jedidiah Bracy caught up with Bertuzzi to discuss the negotiations and what comes next in the process.
Comments (1)

Matthew Palomino

Will you guys be doing a podcast on the Supreme Court decision on the Genetic Non-Discrimination Act?

Jul 13th