At one of their education centers, a teacher works through a PowerPoint presentation and gives the audience a checklist of methods that Russian trolls use to deceive readers, including media manipulation, misrepresentation of facts and half-truths, group intimidation (voting down certain people and voting up themselves), and bot profiles.
On another slide is a diagram of a Twitter profile page showing students what to look for: stock photos, unusually high post volume, and a lack of personal information. And at the end is a lesson on deepfake technology . . . back in 2014.
If the US took media literacy as seriously as Finland does, who knows how good US citizens would be by now at identifying propaganda? To some degree this vigilance already operates almost instinctively on social media. The catfish era baked a certain amount of suspicion into people: they know not to take others' claims of identity and capability at face value, and to ask for proof. But there's a disconnect when it comes to applying that standard universally, and people still get baited into deliberately misleading conversations with trolls, or even with the well-intentioned but misinformed or ignorant.
One thing that will help is that Google has joined the fight.