2024-12-16 at 16:01
December 24, 2024
why are we not worried about human interpretability
why is there no worry about human superintelligence
like . why is there the assumption that AI becomes compoundingly, infinitely smart right after we get reasoning
is it just bc . humans are limited by their speed, and AI is assumed to be able to scale speed + knowledge with enough compute?
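(a toy back-of-envelope of that speed assumption -- every number below is a made-up placeholder; the point is just that the AI side has scaling knobs the human side doesn't:)

```python
# toy arithmetic, all numbers are made-up assumptions:
# a human's serial inner-monologue speed is roughly fixed,
# while an LLM's effective throughput can scale with compute
# (more parallel instances, faster hardware, etc.)

HUMAN_TOKENS_PER_SEC = 5    # assumed: rough inner-monologue pace, no scaling knob
AI_TOKENS_PER_SEC = 50      # assumed: serial speed of one model instance
SECONDS_PER_DAY = 60 * 60 * 24

human_tokens_per_day = HUMAN_TOKENS_PER_SEC * SECONDS_PER_DAY  # one brain, fixed

for n_instances in (1, 100, 10_000):
    ai_tokens_per_day = AI_TOKENS_PER_SEC * n_instances * SECONDS_PER_DAY
    ratio = ai_tokens_per_day / human_tokens_per_day
    print(f"{n_instances:>6} instances -> {ratio:,.0f}x human reasoning throughput")
```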
also .. i'm realizing how slow and suboptimal human reasoning actually is, compared to the ideal we sometimes imagine it to be
also . even if we get an AI that reasons, what if it's as slow as a human just bc of how we implement the reasoning (and/or what if reasoning is just inherently slow? i have no idea)
unless we can somehow get an AI that goes way beyond human reasoning
also wait . semi-related ... the dwarkesh quote w someone asking why tf ai has made no new connections -- like surely if even a below-median-IQ human memorized the whole internet, they would make so many scientific discoveries and whatnot, even just from noticing "oh, that connection is interesting" . but even 4o has not done that
also . imagine self-learning only from a textbook vs learning w a decent tutor (like 4o level) vs learning w an amazing tutor (like sam or pelcovits)
synthetic data
humans make their own synthetic data
and students don't just take things for granted, they ask why and try to derive it themselves, etc
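(a minimal python sketch of that loop -- ask_why, attempt_derivation, and check are hypothetical stand-ins for whatever the learner actually does, not a real API:)

```python
# sketch of "humans make their own synthetic data":
# instead of passively ingesting a textbook claim, the learner asks
# "why?", attempts its own derivation, verifies it, and keeps the
# attempt as new training data it generated itself.

from dataclasses import dataclass

@dataclass
class SyntheticExample:
    question: str
    attempted_derivation: str
    verified: bool

def ask_why(claim: str) -> str:
    # hypothetical question generator
    return f"why is it true that: {claim}?"

def attempt_derivation(question: str) -> str:
    # hypothetical learner step (first-principles attempt)
    return f"attempted derivation for ({question})"

def check(derivation: str, claim: str) -> bool:
    # hypothetical verifier: a proof checker, unit test, or tutor
    return True

textbook = ["F = ma", "entropy never decreases in a closed system"]

dataset = []
for claim in textbook:
    q = ask_why(claim)
    d = attempt_derivation(q)
    dataset.append(SyntheticExample(q, d, verified=check(d, claim)))

# the learner now trains on `dataset` -- data it made itself --
# rather than only on the raw textbook text
```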
also humans are able to ... (re: the quote that i labeled mirror neurons)
also . when we RL humans to look both ways before crossing the road, they don't end up scared arbitrarily or scared of punishment or whatever -- they become scared of the cars themselves (not necessarily a phobia/fear, just scared enough to reliably create the impulse/urge that leads to the action: i need to look both ways or else i might die) ... humans are able to abstract and internalize imaginary consequences
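(a toy sketch of that distinction -- the world model and values below are made-up assumptions, just to show an agent avoiding an outcome it has only ever imagined, never actually experienced or been punished for:)

```python
# contrast: acting from raw experienced punishment vs acting from an
# internalized, imagined consequence simulated through a world model.

WORLD_MODEL = {
    # (state, action) -> imagined next state
    ("at_curb", "cross_without_looking"): "maybe_hit_by_car",
    ("at_curb", "look_both_ways"): "crossed_safely",
}

IMAGINED_VALUE = {
    "maybe_hit_by_car": -1000.0,  # "i might die" -- never actually experienced
    "crossed_safely": 1.0,
}

def choose_action(state: str, actions: list[str]) -> str:
    # evaluate each action by imagining its consequence via the world
    # model, rather than needing a history of actual negative reward
    return max(actions, key=lambda a: IMAGINED_VALUE[WORLD_MODEL[(state, a)]])

print(choose_action("at_curb", ["cross_without_looking", "look_both_ways"]))
# -> look_both_ways: the aversion comes from a simulated outcome
```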