The intriguing edge cases of privacy in an AI-driven world


by Erin Relford, Privacy Engineer at Google

Edge cases in privacy are what keep me passionate about working in technology—particularly those arising from autonomous behavior and AI engagement within our tech products and services. We see this play out already with self-driving cars and food delivery robots. These emerging interactions are rapidly reshaping how privacy engineers approach their work.

As technology grows more sophisticated, the lines blur between privacy, security, safety, and ethics. Too often we overlook the privacy dilemmas these technologies pose for women and underrepresented groups. As a woman, simply stepping into a self-driving vehicle forces my mental model to adapt quickly, bringing a flood of privacy and safety concerns that may not be front of mind for the designers and engineers who built the product around a traditional car scenario. Add to that local governments' growing appetite for using AI in policing and public services, which may increase efficiency but also intensifies the complexity of potential harm.

This is why we need conversations among professionals working within the "cone of protection" – privacy, security, safety, ethics, and legal experts. Collaboration ensures that protections aren't siloed and that we address the intersections of these fields.

For example, privacy engineers need to begin prioritizing data ethics. I recently participated in LA Tech4Good's Data Ethics intensive, drawing insights from materials by influential figures like Timnit Gebru, Catherine D'Ignazio, Lauren Klein, Marika Cifor, and Patricia Garcia. The experience underscored the necessity of centering equity and ethics in my privacy work.

We already analyze data lifecycles in products and services, but there's room for Trust & Safety considerations and for SOX-like general IT controls for backend systems handling data in research labs.

Now more than ever, privacy engineers working with product teams can integrate ethical questions from the start – motivating proactive conversations about design and intent, rather than scrambling for solutions right before launch.

If you're working within the "cone of protection," I urge you to connect with organizations championing data ethics research – research that can make a world tethered to technology a safer and more inclusive one for everyone.


About the author

Erin Relford

Erin Relford is an accomplished engineer, creative writer, and content producer with a remarkable 22-year career showcasing deep domain expertise across geotechnical, oil and gas, waste management, media and entertainment, healthcare, robotics, and professional services. Her expertise as a governance, risk, and compliance professional has been instrumental in shaping successful programs across major organizations.

She's currently a Privacy Engineer at Google and a dedicated contributor to the tech community: she serves as an on-air talent for Women Techmakers, Google, and AnitaB.org, and holds roles as a Black@Core Inclusion Leader, Google Champion, and Intern Mentor, leading diversity, equity, and inclusion efforts.
