
Thursday, October 1, 2020

Here’s a thought: brains are not computationally limited in the same way computers are, since they are biochemical. The brain is capable of immense parallel processing at the speed of its own signals. This is part of why it is so plastic and able to act as a giant graph with so many connections.

How can computers mimic this in a computationally efficient way? We have already seen that GPUs are capable of immense parallel processing, so the main challenge is mimicking the structure and properties of the brain in order to learn. Neural networks are a good example of this: convolutional networks, for instance, loosely mimic the brain’s visual processing.
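As a concrete illustration of both points, here is a minimal sketch (the class name, layer sizes, and batch shape are my own hypothetical choices) of a small convolutional network in PyTorch. Each convolution applies the same small filters across the whole image, and the GPU evaluates every filter and every image in the batch in parallel.

```python
import torch
import torch.nn as nn

# Hypothetical toy example: a tiny convolutional network.
# Convolutions apply the same local filter everywhere in the image,
# loosely mimicking local receptive fields in visual processing.
class TinyVisionNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 16 filters run in parallel
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Every image in the batch and every filter is processed in parallel on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyVisionNet().to(device)
batch = torch.randn(64, 3, 32, 32, device=device)
logits = model(batch)  # shape: (64, 10)
```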

Problem solving

  • Difficult challenges make everything much smarter. Difficulty is defined by how unrelated a problem is to existing solutions (hard problems can be unintuitive), so solving one creates the relationships that make future problems easier and also helps with meta-learning.
  • Solutions sometimes come from randomness, intuition, trial and error, or something previously seen. Iterating through all possibilities and inferences (reasoning) is also an option; a toy sketch of these strategies follows this list.
  • Experimentation allows us to witness how effective our ideas are, and we can reflect on the results to figure out what to do in the future.
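To make the strategies above concrete, here is a toy sketch (all names and the example problem are hypothetical) that tries previously seen solutions first, then random trial and error, then exhaustive enumeration as a last-resort form of reasoning.

```python
import itertools
import random

def solve(problem, is_solution, known_solutions, candidates, random_tries=100):
    # 1. Previously seen: reuse anything that already works.
    for s in known_solutions:
        if is_solution(problem, s):
            return s
    # 2. Trial and error: sample candidates at random.
    for _ in range(random_tries):
        s = random.choice(candidates)
        if is_solution(problem, s):
            return s
    # 3. Reasoning as exhaustive search: iterate through all possibilities.
    for s in candidates:
        if is_solution(problem, s):
            return s
    return None

# Example problem: find a pair of numbers summing to a target.
candidates = list(itertools.combinations(range(10), 2))
found = solve(7, lambda p, s: sum(s) == p,
              known_solutions=[(3, 4)], candidates=candidates)
print(found)  # (3, 4) — the previously seen solution is reused
```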

Memory

  • Memory has a recency bias for some reason. Maybe this is because new experiences shape our brain and may overwrite old changes.
  • The system stores all weights, architectures, and maybe a few example inputs; a toy recency-biased store is sketched after this list.
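Here is a toy sketch of such a store (purely hypothetical, not a claim about how the brain works): a bounded, recency-biased memory where new entries eventually overwrite the oldest ones.

```python
from collections import OrderedDict

# Hypothetical sketch: a bounded store where the oldest entries get
# overwritten by new ones, mirroring "new experiences may overwrite
# old changes".
class RecencyMemory:
    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self.store = OrderedDict()  # task -> (weights, architecture, examples)

    def remember(self, task, weights, architecture, examples):
        if task in self.store:
            self.store.move_to_end(task)    # refreshing counts as recent
        self.store[task] = (weights, architecture, examples)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the oldest memory

    def recall(self, task):
        return self.store.get(task)

memory = RecencyMemory(capacity=2)
memory.remember("vision", weights=[0.1], architecture="cnn", examples=["img"])
memory.remember("language", weights=[0.4], architecture="rnn", examples=["txt"])
memory.remember("control", weights=[0.9], architecture="mlp", examples=["act"])
print(memory.recall("vision"))  # None — overwritten by more recent memories
```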

Reasoning

  • Reasoning is constructing a story for a thought: inferences are chained together to work through a problem or thought.
  • We can manipulate objects and scenes in our mind.

Intuition

  • Instinctual, fast, and somewhat accurate.

Language

  • Organizes thoughts and makes them easier to map in our mind.

Perception

Plasticity

  • Every part should affect the others and be able to substitute for other parts or contribute some unique input.
  • Learning something new would affect the meta-learning algorithms and weights, how transfer learning can be done, and optionally existing models and weights.

System

  • A solution does not have to be perfectly accurate or robust; we can approximate.
  • Weights, inputs, outputs, and features all need to be shared across multiple different graphs and networks (see the weight-sharing sketch after this list).
  • Tasks should be derivable from other, larger-level tasks, e.g., translating a textual command into a task.
  • It should be able to generate a task and an architecture from an input.
  • It should be capable of browsing the internet for structured and unstructured data. It wouldn’t treat that data as ground truth, but it would use visual, textual, and structural inferences to learn things and self-evaluate.
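As a small sketch of the weight-sharing point (layer sizes and task heads are hypothetical), one shared encoder can feed several task-specific networks, so every task reads, and through its gradients updates, the same features:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: one shared encoder feeds two task-specific heads,
# so both tasks consume and shape the same features.
shared_encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
classify_head = nn.Linear(32, 10)  # e.g., a classification task
regress_head = nn.Linear(32, 1)    # e.g., a regression task

x = torch.randn(8, 16)
features = shared_encoder(x)       # the same features flow into both graphs
class_logits = classify_head(features)
regression = regress_head(features)

# Gradients from both tasks update the shared encoder's weights.
loss = class_logits.sum() + regression.sum()
loss.backward()
print(shared_encoder[0].weight.grad.shape)  # torch.Size([32, 16])
```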

Efficiency

  • Neural architecture search, meta-learning, and transfer learning to optimize models and to learn how to optimize; a minimal transfer-learning sketch follows this list.
  • Self-supervised meta-learning (the system can learn how to learn by generating its own problem and task formalizations).
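Here is a minimal transfer-learning sketch using the standard frozen-backbone pattern (the class count is hypothetical, and this illustrates the general technique, not the system described above); `pretrained=True` downloads ImageNet weights.

```python
import torch
import torch.nn as nn
from torchvision import models

# Standard transfer-learning pattern: reuse a pretrained backbone and
# retrain only a new task-specific head.
backbone = models.resnet18(pretrained=True)
for param in backbone.parameters():
    param.requires_grad = False           # freeze the transferred weights

num_classes = 5                           # hypothetical new task size
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new head

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
batch = torch.randn(4, 3, 224, 224)
logits = backbone(batch)                  # shape: (4, 5)
```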
Tags: research, ml, neuroscience

