Artificial intelligence (AI) won’t be taking over the world any time soon, if ever. The theory that AI will become an existential threat to humanity has been popularized by many influential people, including Elon Musk, Neil deGrasse Tyson, and Stephen Hawking. While these possibilities are interesting to discuss, they concern situations that almost certainly will not occur. AI and machine learning are progressing at a surprising rate, but the future of AI isn’t the dystopian apocalypse that so many have come to fear.
Simply put, AI is the concept of developing computers capable of mimicking human cognitive abilities such as problem solving, planning, learning, and reasoning. Many applications of AI are now being implemented, including machine learning, a subset of AI that often uses statistical techniques to let computers learn from data without being explicitly programmed to do so. Computer scientists are using these and other techniques to create human-like computing systems that aid us in various ways, such as voice assistants, advanced search engines, and self-driving cars. The real concern is that, in the wrong hands, AI’s immense power could be used in devastating ways.
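To make the phrase “learn from data without being explicitly programmed” concrete, here is a deliberately tiny sketch (not from the original essay, and far simpler than any real machine-learning system): instead of hard-coding the rule “multiply by 2,” the program estimates that rule from example input–output pairs using a least-squares fit.

```python
# Toy illustration of "learning from data": we never write the rule
# y = 2x ourselves; we estimate it from examples instead.
examples = [(1, 2), (2, 4), (3, 6)]  # (input, observed output) pairs

# Closed-form least-squares estimate for a one-parameter model y = w * x.
w = sum(x * y for x, y in examples) / sum(x * x for x, _ in examples)

print(w)       # the learned rule: 2.0
print(w * 10)  # applying it to an unseen input: 20.0
```

Real systems work with millions of parameters and noisy data rather than one exact coefficient, but the principle is the same: the behavior comes from fitting examples, not from explicit instructions.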
Determining how much of a threat AI will pose in the future is hard. It has already significantly affected humanity, but we have been exaggerating the potential harm that intelligent machines might inflict on the world. If we ever become capable of programming machines with superhuman abilities, chances are we will not be foolish enough to grant them the power to outthink us.
In fact, we probably won’t be giving AI machines much power at all. Humans love power, and we will do all we can to maintain control; no manager wants to leave all the big decisions to an AI. Additionally, AI systems will be designed primarily to do the dirty work that people don’t want to do, such as jobs involving repetitive tasks or exposure to dangerous situations. Machines will be aiding humans, so why would they turn against us?
Moreover, if someone were ever to deliberately build a generally intelligent and dangerous AI system, others could build a second, narrower, and therefore more efficient AI whose only purpose would be to eradicate the first. Given access to similar or identical computing resources, the second machine could be designed to prevail, just as a shark or a virus can threaten humans despite far inferior intelligence. This provides a safe way to deal with potential risks in the time ahead.
Besides, intelligence isn’t correlated with a craving for power; jealousy and greed are distinctly human qualities. Robots will almost certainly not acquire them, and even with access to nearly unlimited resources and information, machines that act according to lines of code will most likely never seek domination.
Ultimately, only our own carelessness and stupidity could cause our demise. Elon Musk, Stephen Hawking, and dozens of experts on artificial intelligence signed an open letter in January 2015 calling for research on the societal impacts of AI, and further efforts like this will help ensure that the theory of technological singularity doesn’t become a reality.
Despite these reassuring signs, some aspects of AI do warrant worry. Real threats already exist and need attention. AI is evolving rapidly and radically transforming society. In the near future, expert systems will drastically alter jobs and wealth, inducing unexpected economic inequalities and even reshaping the global balance of power. AI companies are also building big data repositories containing detailed information about us and our social relationships, raising concerns about privacy and about the ability to influence popular opinion and thought. This threat is imminent and difficult to address, because it is subtler and less visible than economic inequality.
Equally important, AI technologies could enable new forms of cybercrime, political disruption, and even physical attacks. AI has the potential to revolutionize the ability of bad actors to threaten everyday life. Misused, it could generate targeted, personalized ads or malicious links that exploit algorithmic profiling to increase their effectiveness, or automate disinformation campaigns aimed at specific candidates in an election. It could also be used to efficiently find weak points or bugs in security systems for attacks or leaks. These risks are real, and although AI itself could be heavily utilized in cybersecurity to defend against such threats, the danger remains.
It is urgent that we turn the spotlight onto these inevitable challenges instead of arguing about unlikely and unscientific theories. Machines probably won’t take over the world the way sci-fi stories have it, so we need to confront the real threats AI poses to mankind, because they will take shape sooner than we may think.