HAI 20 notes on knowledge graphs.
Yejin Choi's work
Kahneman's three cognitive systems:
- Perception (object recognition, image segmentation)
- Intuition / System 1 (intuitive inferences about pre- and post-conditions, motivations and intents, mental and emotional states: what happens before and after?)
- Reasoning / System 2 (deliberate problem solving)
We hallucinate and imagine stories based on our intuition. These stories are not always true; they are stochastic and depend heavily on context.
A relationship graph of objects and context. Even a caption isn't enough: why is this occurring? What happened before? What happens after?
You can make inferences about the context of images, videos, audio, and text by asking questions.
Because the space of possible inferences in language is effectively infinite, reasoning becomes a generative task (asking questions and generating answers) rather than a discriminative one, which is why we need the whole language model.
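As a toy illustration of inference-as-generation: a minimal sketch, assuming the HuggingFace transformers library and an off-the-shelf GPT-2 (COMET itself fine-tunes a similar model on ATOMIC triples; the prompt style here is illustrative, not COMET's actual input format).

```python
# Commonsense inference as generation rather than classification:
# "ask a question" about an event and sample open-ended answers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def infer(event: str, question: str, n: int = 3) -> list[str]:
    """Generate n open-ended inferences about an event."""
    inputs = tokenizer(f"{event} {question}", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=True,               # sample: the answer space is open-ended
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(o[prompt_len:], skip_special_tokens=True)
            for o in outputs]

print(infer("PersonX gives PersonY a gift.", "Because PersonX wanted"))
```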
Used MTurk to crowdsource commonsense knowledge around event prompts in natural language.
Papers (all from Yejin Choi's group):
- ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
- COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
- Social Chemistry 101: Learning to Reason about Social and Moral Norms
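To make the if-then structure concrete, here is a rough sketch of an ATOMIC-style entry (the relation names xIntent, xNeed, xEffect, oReact are real dimensions from the ATOMIC paper; the event and inference strings are made up for illustration):

```python
# One ATOMIC node: a base event plus crowdsourced free-text inferences,
# keyed by relation type (x = PersonX, o = others involved).
atomic_entry = {
    "event":   "PersonX pays PersonY a compliment",
    "xIntent": ["to be nice", "to flatter PersonY"],   # why X did it
    "xNeed":   ["to notice something about PersonY"],  # what X needed beforehand
    "xEffect": ["smiles"],                             # what happens to X after
    "oReact":  ["feels flattered"],                    # how PersonY feels
}

# COMET frames knowledge-graph completion as generation: given a
# (head event, relation) pair, a transformer generates the tail phrase,
# so it can produce inferences for events never seen in the graph.
head, relation = atomic_entry["event"], "oReact"
```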
We could try to direct the inferences made by COMET or ATOMIC towards privacy-related speculations and reasoning.
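For example, reusing the infer() sketch above with a privacy-flavored prompt (the wording is a made-up probe, not something from ATOMIC or the talk):

```python
# Hypothetical privacy-oriented probe built on the earlier sketch.
print(infer("PersonX posts PersonY's photo online.",
            "As a result, PersonY feels"))
```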
Reasons are used to justify oneself and convince others. Intuitive inferences typically guide oneself.
Transformer-based seq2seq models.
More general discussion
Privacy is a fundamental concept that we can abstract, just like time, physics, objects, actions, locations… Can we find a smarter way to approach this than the current paradigm of supervised and unsupervised learning?
Maybe there exists some meta-learning algorithm that can figure out which model architecture, dataset, learning rule, and task to use for a given scenario.
Latent variable model: some prior with arrows going to the data. Classic probabilistic language models can be framed this way. Self-supervised learning like BERT or GPT assumes very little structure (it is just a generative model). Generative models will obviously generate very realistic text, since that is exactly the task they were structured for. Reasoning is harder: it imagines things we have never seen, things very different from our experience. Imagine self-supervised learning combined with some sort of commonsense prior.
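In standard notation (textbook formulas, not from the talk), the latent variable view posits a prior over z generating the data, while a GPT-style self-supervised model assumes only the chain-rule factorization:

```latex
% Latent variable model: prior over z, arrows going to the data x.
p(x) = \sum_{z} p(z)\, p(x \mid z)

% Self-supervised LM (GPT-style): little assumed structure,
% just the chain rule over tokens.
p(x_{1:T}) = \prod_{t=1}^{T} p(x_t \mid x_{<t})
```

The "commonsense prior" idea is then, roughly: keep the second objective, but let something like the first model's p(z) encode commonsense structure.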
The stronger the language model, the faster the generalization.
Analogy and common sense
Common sense is something we all know before we even know language. Knowledge develops in multiple stages, e.g. through adolescence.
How can we imagine what it's like to crash in a car or drown in water without having experienced it?
Counterpoint: intuition and this System 1 / System 2 dichotomy may not actually exist as distinct systems. Learning is very complex; intuition can be vague or strong, right or wrong, etc…
Conscious vs. unconscious reasoning; emotion-based or moral judgements. Intuition could just be fast, unconscious processing, yet still have a lot of computational structure.
There is no real way to assign an actual probabilistic score; our scoring is inconsistent. Sometimes we subconsciously or unconsciously understand problems or concepts.