December 24, 2024 • 517 words
train type 1 with type 2 (as in at the same time, just like how we train the word embedding matrix with the rest of the model), though type 2 is constructed from type 1
one new sort-of reason i've thought of that it's good to train type 1 and type 2 together: the combined training will also determine how type 2 is constructed from type 1. otherwise you're drawing that demarcation by hand, which is probably really naive, but if you allow the demarcation to emerge on its own, it follows how humans learn and it can develop that complexity by itself (sort of like The Bitter Lesson?). a minimal sketch of what this could look like is below.
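to make "type 2 is constructed from type 1, and both are trained together" concrete, here's a minimal sketch (PyTorch; every name here is a hypothetical stand-in, since i'm guessing at what type 1 and type 2 actually refer to: i'm treating type 1 as a token embedding and type 2 as a representation built from it by a learned map). the point is just that the construction map is trained by the same loss as everything else, so the demarcation is learned rather than hand-set:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointModel(nn.Module):
    # hypothetical stand-ins: 'type 1' as an embedding matrix,
    # 'type 2' as a representation constructed from type 1
    def __init__(self, vocab_size=1000, d1=64, d2=64):
        super().__init__()
        self.type1 = nn.Embedding(vocab_size, d1)  # type 1
        # the learned construction of type 2 from type 1;
        # the 'demarcation' lives entirely in these trainable weights
        self.to_type2 = nn.Linear(d1, d2)
        self.head = nn.Linear(d2, vocab_size)

    def forward(self, tokens):
        h1 = self.type1(tokens)           # type-1 representation
        h2 = F.relu(self.to_type2(h1))    # type 2, built from type 1
        return self.head(h2)

model = JointModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 1000, (8, 16))
targets = torch.randint(0, 1000, (8, 16))

# one joint training step: the loss gradient flows through type 2
# back into type 1, so a single objective shapes both at once,
# including how type 2 gets constructed from type 1
loss = F.cross_entropy(model(tokens).flatten(0, 1), targets.flatten())
opt.zero_grad()
loss.backward()
opt.step()
```

the only thing fixed about the type-1/type-2 boundary here is the shapes; everything else is left to the optimizer, which is the bitter-lesson-ish part.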
mirror neurons
like how humans will imagine something / visualize it / feel it (as in, those neurons are activated) when you say "do xyz" or "don't do xyz"
(i don't know if mirror neurons are actually related to this, but i'm just going to label it with this for now)
can we make the model learn with the same process?
also can we make it more interpretable?
at least to the level of humans. as in, not that the neurons themselves are interpretable, but that the ai can tell us its inner thought process like humans do... humans can do this because we have a sort of voice inside our head, or at least something similar, yeah? (TODO: how to do this?)
also note that our articulation of our thought process is not actually 100% accurate, because:
1. we're not that good at remembering accurately, and we can make up or smudge memories
2. i don't think we're that good at introspecting cause and effect, even relatively rational people; we have so many subconscious/emotional biases/filters that just skew our perception
3. it happens really fast, so sometimes we're not even able to see it
3.1. we take many intermediary steps for granted
3.1.1. these are already so ingrained as subprograms that we don't even execute them consciously
wait. when you explain your thought process, e.g. you're explaining at a high level and then someone asks for more detail, asks why you did xyz, and you say "i Felt i needed to do xyz"... that usage of the word Felt / the concept of Feeling exists because you've associated the word with the experience
LLMs don't have that, unless we make them learn language in the way that humans learn language
also, i think NNs in general have perhaps not been trained that way at all (one toy version of "associate the word with the experience" is sketched below)
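a very hypothetical mechanical reading of "associating the word with the experience": a CLIP-style contrastive alignment between word embeddings and whatever stands in for "experience" (all shapes and names below are made up for illustration). this isn't a claim about how humans do it, just the simplest training signal i know of that ties a symbol to a co-occurring state:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

word_emb = nn.Embedding(1000, 32)      # word side
experience_proj = nn.Linear(128, 32)   # 'experience' side (some sensory/internal state)
opt = torch.optim.Adam(
    list(word_emb.parameters()) + list(experience_proj.parameters()), lr=1e-3)

words = torch.randint(0, 1000, (16,))  # a batch of word ids
experiences = torch.randn(16, 128)     # the states they co-occurred with

w = F.normalize(word_emb(words), dim=-1)
e = F.normalize(experience_proj(experiences), dim=-1)
logits = w @ e.t() / 0.07              # similarity of every word/experience pair

# InfoNCE loss: each word's own experience is the positive,
# every other experience in the batch is a negative (and vice versa)
labels = torch.arange(16)
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
opt.zero_grad()
loss.backward()
opt.step()
```

after enough of this, saying the word should light up (retrieve) the associated experience representation, which is roughly the "Felt" association above.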
also, can we make a model learn from textbooks by making it actually go through the textbooks the way a human would?
wait, even if we make reasoning ai, that doesn't automatically get us to continually improving ai, because we have reasoning humans and yet they don't continually improve their reasoning
and also, some of their talents are in... Not Reasoning, like regan's talents or salena's talents. and just in general, reasoning is a small subset of useful abilities