How AI creates new types of personal information

Much has been written about China's dystopian social credit scores and the techno-authoritarian vision of police arresting people for 'pre-crime', as in the film 'Minority Report'.

Indeed, many AI applications have the potential to be quite scary. New AI developments don't just collect personal data; they create new types of personal data through incredibly precise analysis of body movements, facial expressions, or involuntary physical actions such as eye movements and breathing rhythm.

Here is our summary of what is coming down the pipe that challenges cherished ideas of privacy.

1. Emotion detection - AI is getting very good at detecting emotions such as happiness, sadness, and anger from facial expressions. This could reveal very personal emotional states without consent.

2. Health diagnosis - Subtle facial movements and body motions can indicate neurological conditions and other health issues. AI analysis could be used to infer medical conditions, disabilities, or illness without necessarily asking the individual if they want to be evaluated or diagnosed.

3. Activity tracking - Gait analysis, posture, gestures and more can reveal lifestyle factors such as exercise levels, smoking and drinking habits, and drug use. AI could uncover very private lifestyle choices.

4. Demographic attribution - Factors like age, gender, disability and ethnicity could be deduced from biometrics. This takes us into what the law often calls sensitive information and categories that should not be used to discriminate against an individual.

5. Thought identification - Some machine learning models attempt to predict inner thoughts and cognitive processes from slight fluctuations in expressions and micro-gestures. This would be a profound invasion of mental privacy.

6. Skills/abilities tracking - Body language can indicate skill aptitudes like leadership abilities, spatial skills, dexterity levels and more. AI assessment could judge capabilities without any actual testing.

As we noted, in many jurisdictions, information like health status, emotions, ethnic background and thought processes is considered sensitive personal data or "special category" data under law. If AI systems create such sensitive insights without explicit consent, they would likely violate a number of privacy statutes. Strict regulations around the collection and use of biometric data already restrict some of these AI capabilities in some countries.

Companies trying to go to market with these types of capabilities need to move with care. Even a cancer diagnosis application that saves lives can be used in much darker ways.

Existing privacy laws provide some protections against AI-driven emotion detection and thought identification, which also means companies with these technologies need to be strategic in how they introduce their products.

In the US, HIPAA regulations would protect personal health data unearthed through AI analysis of facial expressions and body language, but only when that data is handled by covered entities such as healthcare providers and insurers. Non-medical emotion inference or thought identification could fall into gaps in American privacy laws.

In the EU and UK, the GDPR has a much broader scope. Biometric data, special category data, and inferences about mental or emotional states likely qualify as "personal data" under these statutes. Creating this kind of data generally requires explicit consent, or another lawful basis for processing special category data. The GDPR also gives individuals the right to contest automated decisions made about them by AI systems.

In many Asian jurisdictions, including Japan and South Korea, this type of personal data would also fall squarely within the definition of sensitive personal data under local privacy regimes.

New techniques for inferring insights from video, images and behaviors are emerging constantly. Updating and enforcing regulations around biometric data, indirect health tracking, and emotional and thought surveillance is an ongoing challenge for lawmakers worldwide. There are still many open questions around what is and isn't acceptable use of the technology.

Waltzer is already working with startups facing thorny questions around this type of biometric surveillance capability. For a confidential discussion of this or any similar challenge you are facing, get in contact.
