Redacting personally identifiable information (PII) from images and videos is usually done as an afterthought, by which point many people have already interacted with the footage. As image metadata such as date, time, and location grows richer and computer vision techniques advance, the minimum information needed to identify a unique individual shrinks as well. On top of that, facial recognition and identity matching pose markedly different challenges for humans than for algorithms. To demonstrate some of these differences, we’ve created the A.I. Eye Exam.
At Numina, we are working on an open-source tool for computationally detecting and redacting image PII. We will also build validation tools that let users compare and evaluate the rigor of different redaction techniques from the perspective of both humans and intelligent agents (machines). There are advantages to automating this process. While we as humans cannot choose to “unsee” parts of an image, artificial intelligence can be trained to selectively respond to or ignore different parts of one. In this way, AI can serve as a protective layer between content creators and the humans captured in their footage, guarding against bad actors of all sorts.
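To make the detect-and-redact idea concrete, here is a minimal sketch of the redaction step. It assumes face bounding boxes have already been produced by a separate detector (e.g., a Haar cascade or a neural face detector); the function name and box format are illustrative, not the tool's actual API.

```python
import numpy as np

def redact_regions(image, boxes, block=8):
    """Pixelate each (x, y, w, h) bounding box in a copy of `image`.

    Each box is divided into block x block tiles, and every pixel in a
    tile is replaced by the tile's mean color, destroying fine facial
    detail while leaving the rest of the frame untouched.
    """
    out = image.copy()
    for x, y, w, h in boxes:
        region = out[y:y + h, x:x + w]
        rh, rw = region.shape[:2]
        for by in range(0, rh, block):
            for bx in range(0, rw, block):
                tile = region[by:by + block, bx:bx + block]
                # Average over the tile's spatial axes; broadcasting
                # fills the whole tile with that single mean color.
                tile[...] = tile.mean(axis=(0, 1))
    return out
```

In practice the boxes would come from a detection model, and pixelation could be swapped for Gaussian blur or solid masking depending on how rigorous the redaction needs to be against machine re-identification.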
As more real-world footage is captured, stored, and uploaded, it’s important that platforms for image and video sharing adopt privacy-forward policies that account for developments in emerging technology (e.g., facial recognition, object tracking) and shift the mindset from collecting and storing all data possible to collecting only what is needed. These tools can make that process simple and automatic, even for non-technical teams.
This project was part of an application to the Knight Foundation’s Ethics and Governance of AI Initiative, for which Numina was a 2018 finalist.