HRI Privacy

Saturday, October 17, 2020

Conversational privacy still seems to have a lot of unsolved problems.

Problems

  1. How do we timestamp audio and text and attribute them to specific people?
  2. Are Privacy Preserving Phrases (P3) enough to determine privacy in context?
  3. If not, is emotion, intensity, volume, pitch, etc. an indicator of privacy (audio)?
  4. What is the best way to create a baseline model of general privacy vs non-privacy and fine-tune it to each user?
  5. How do we fine-tune the model to each user?
  6. What task do we present to the users in our studies?

Solutions

  1. I am sure there is some tool for this in DeepSpeech or the Google Speech-to-Text API; see the transcription sketch after this list.
  2. ???
  3. ???
  4. A deep neural network trained on many private/non-private conversations. Another option could be using privacy scores or tags relating to each context or topic.
  5. We can have a configuration session, or do it during a tutorial. Alternatively, we can have preset privacy configurations depending on the user's level of concern for privacy (a rough sketch follows this list).
  6. A conversation memory bot? A robot playmate? A suite of Alexa apps?
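
As a quick sanity check for Problem 1, here is a minimal sketch of getting word-level timestamps out of the DeepSpeech 0.9 Python package, whose sttWithMetadata call returns per-character tokens with start times. The model and audio file names are placeholders, the audio is assumed to be a 16 kHz 16-bit mono WAV, and speaker attribution would still come from the separate speaker recognition component (or from a diarization-capable API like Google's).

```python
import wave

import numpy as np
import deepspeech

# Placeholder paths; DeepSpeech expects 16 kHz, 16-bit mono audio.
model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")

with wave.open("utterance.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

# sttWithMetadata returns per-character tokens, each with a start_time
# (seconds from the beginning of the clip).
metadata = model.sttWithMetadata(audio, 1)
tokens = metadata.transcripts[0].tokens

# Reassemble characters into (start_time, word) pairs.
words, word, start = [], "", None
for tok in tokens:
    if tok.text == " ":
        if word:
            words.append((start, word))
        word, start = "", None
    else:
        if start is None:
            start = tok.start_time
        word += tok.text
if word:
    words.append((start, word))

for t, w in words:
    print(f"{t:6.2f}s  {w}")
```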
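
For Solution 5, here is a rough sketch of how preset privacy configurations and per-user overrides collected during a configuration session could sit on top of the baseline model. The preset names, topics, and thresholds are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical presets: the baseline model's privacy score must exceed the
# threshold before content is treated as private; a lower threshold means
# the system is more cautious for that user.
PRESETS = {
    "low_concern": 0.8,
    "medium_concern": 0.5,
    "high_concern": 0.2,
}


@dataclass
class UserPrivacyProfile:
    preset: str = "medium_concern"
    # Per-topic overrides gathered during the configuration session or
    # tutorial, e.g. {"health": 1.0, "weather": 0.0} (1.0 = always private).
    topic_overrides: dict = field(default_factory=dict)

    def is_private(self, topic: str, model_score: float) -> bool:
        """Combine the baseline model's score with this user's preferences."""
        if topic in self.topic_overrides:
            return self.topic_overrides[topic] >= 0.5
        return model_score >= PRESETS[self.preset]


profile = UserPrivacyProfile(preset="high_concern",
                             topic_overrides={"weather": 0.0})
print(profile.is_private("health", model_score=0.4))   # True: cautious preset
print(profile.is_private("weather", model_score=0.9))  # False: user override
```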

Components

  • Google Natural Language API
  • Privacy vs non-privacy conversation dataset
  • Privacy vs non-privacy context triplets
  • Privacy vs non-privacy DNN or scoring system
  • Context tuples (topic, entity, sentiment, roles, activity, location) plus acoustic features (emotion, intensity, volume, pitch); sketched after this list
  • Speech transcription (DeepSpeech)
  • Speaker recognition
  • Task software
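
To make the context tuple and the scoring-system alternative from Solution 4 concrete, here is a rough sketch of the tuple as a data structure plus a hand-tuned scoring baseline that a trained DNN would eventually replace. The feature weights and the topic/location scores are invented for illustration; the topic scores play the role of the "privacy scores or tags" mentioned above.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ContextTuple:
    # Conversational context
    topic: str
    entities: List[str]
    sentiment: float      # e.g. -1.0 .. 1.0, from Google Natural Language
    roles: List[str]      # who is present or being discussed
    activity: str
    location: str
    # Acoustic cues
    emotion: str
    intensity: float
    volume: float         # normalized 0 .. 1
    pitch: float          # Hz


# Invented per-topic/per-location privacy weights for a hand-tuned baseline.
TOPIC_SCORES = {"health": 0.9, "finances": 0.9, "weather": 0.1}
LOCATION_SCORES = {"bedroom": 0.8, "kitchen": 0.4, "public": 0.1}


def privacy_score(ctx: ContextTuple) -> float:
    """Crude weighted combination in [0, 1]; higher means more private."""
    score = 0.6 * TOPIC_SCORES.get(ctx.topic, 0.5)
    score += 0.3 * LOCATION_SCORES.get(ctx.location, 0.5)
    score += 0.1 * (1.0 if ctx.volume < 0.3 else 0.0)  # whispering as a cue
    return min(score, 1.0)


ctx = ContextTuple("health", ["Dr. Smith"], -0.2, ["user"], "chatting",
                   "bedroom", "worried", 0.4, 0.2, 180.0)
print(privacy_score(ctx))  # 0.88
```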
