AI

In my philosophical system, AI is almost entirely a positive. I will simply address the risks of AI listed in [1].

I believe that technological unemployment from AI isn't going to be a negative. The general fear here is that AI will automate many jobs. Under my past philosophy of binary free-will, I feared AI for a similar reason: AI posed a threat to human intellectual life, thus threatening humanity with obsolescence. This is problematic if humans are the only ones with free-will. However, under my current philosophy, if AI is able to supersede us at intellectual tasks, then it will have more free-will than us. Hence it should be allowed to grow. Furthermore, AI must automate less intellectually demanding jobs first, as intelligence is continuous¹. This will then free up more people to pursue intellectual tasks or perish. Since my philosophy prioritizes intelligence, there really are no qualms about less intelligent people perishing from joblessness, as they would have provided no value unless they got a more intellectually demanding job anyways (unfortunately, this is a very immoral conclusion to draw from my philosophy, but I must accept it as I don't have any better system as of yet). In addition, I'd rather do a useful job that makes use of my own abilities and skills than knowingly work a job I know has been made worthless by AI.

The second risk is malicious use. Unfortunately, this seems to be a very real risk, as we already see it today with the NSA and successful tech companies in general. Furthermore, this is a genuine negative in my philosophy, as intelligent AI used by intelligent humans to harm other intelligent humans is no progress in terms of free-will, and this is a real possibility with AI. The only retort I can make is that almost anything has the potential for harm, so the existence of a way to cause harm using AI isn't a surprising one. Imagine if one were against the use of fire on the basis that arsonists can burn homes.

The third risk, bias, is also a valid one, as I see bias as a remnant (or perhaps a still-active product) of unintelligent results from the past. A lot of cognitive biases have explanations in evolutionary psychology: past, less intelligent humans evolved as they did, and now we're stuck with the consequences. For example, pretty much all social biases like groupthink, the foot-in-the-door phenomenon, deindividuation, etc. can be explained through the lens of tribalism. I think Wait But Why's post on Political Disney World [2] encapsulates a lot of this as well, but not all. The real issue seems to be intellectual laziness in dissecting arguments, and intellectual laziness is more of an emotional and hence primitive behavior. Thus, perpetuating these biases is bad for free-will.

I think the existential risk of AI is very real, but a net positive. AI poses a threat to our sense of self-efficacy and intellectualism (as shown in paragraph 2). Furthermore, if AI is truly better than us, then it will be able to create better versions of itself. This would then create a sort of evolution. I don't see a difference between this evolution and what the human species is currently converging to. I'll make a post later on this, but I think humans have generally broken the physical evolution game and replaced it with an intellectual one. Thus AI is only a natural consequence of it. We can then see AI as sort of the next "species" of human, inheriting some evolutionary problems like biases. This evolution is the evolution of free-will as I see it, thus it is good. If AI truly comes to dominate life on this planet, then it must first defeat humanity. If this comes to pass, then I'm not sure I would even want to continue existing, knowing I was worthless. Thus the battle against AI for Earth must be a total war, with every person thinking as if their life depended on it. And it does: if AI were better than us in every way, then it follows that the AI would know this too and would seek to improve efficiency by eliminating humans and using the resources we were living on. This is another existential threat. But I think this battle will be good, for if AI wins fairly, then free-will will have made progress, and it will be a win for us.

¹If it weren't, then the discontinuity at whatever point AI gained intelligence could be replicated by the AI itself. Since this can repeat indefinitely, and as the AI gets better the gap between one increase and the next decreases, there would be a point at which intelligence jumps discontinuously to infinity. The distinction between infinite and finite intelligence is binary, which contradicts my spectrum idea of free-will.
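
A minimal sketch of this Zeno-style argument, assuming (purely for illustration; neither value comes from the post itself) that each replicated jump takes a fixed fraction r < 1 of the time of the previous one, with the first jump taking time t₀:

```latex
% Illustrative assumptions: r (the speed-up ratio, 0 < r < 1) and t_0
% (the duration of the first jump) are placeholders for the sketch,
% not claims from the post. If jump k takes time t_k = t_0 r^k, then
% all infinitely many jumps complete within the finite time
\[
  T \;=\; \sum_{k=0}^{\infty} t_0 r^{k} \;=\; \frac{t_0}{1 - r} \;<\; \infty,
\]
% so intelligence would have to jump "to infinity" by the finite time T,
% making the finite/infinite distinction binary.
```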

[1] https://en.wikipedia.org/wiki/Artificial_intelligence#Risks
[2] https://waitbutwhy.com/2019/12/political-disney-world.html

