Luddites and the Industrial Revolution, cars and equestrians, phones and baby boomers: technology and humanity have historically been at odds.
Most recently, increasingly advanced artificial intelligence (AI) has threatened to take over the jobs of everyone from writers to waiters.
When AI can do anything from taking AP exams and generating lifelike images to automating software development and gaming the stock market, it’s no surprise that many envision a Terminator-esque dystopia in humanity’s near future.
However, is it completely fair to assume that humanity is doomed to live under these fledgling robot overlords?
What is AI?
In 1950, mathematician and famed World War II codebreaker Alan Turing posed a now-famous question: “Can machines think?”
He reasoned that if humans could use information to solve problems and make decisions, machines should be able to do the same.
By then, movies like “The Wizard of Oz” and books like “I, Robot” had introduced the idea of a talking, thinking robot to the general public. Researchers like Turing took this one step further, developing methods to test the intelligence of machines.
Yet this work was expensive, and the computers of the 1950s were painfully slow, unsophisticated calculators that lacked the capabilities needed to conduct such testing efficiently. Reduced funding and waning interest led to a drought in artificial intelligence research.
Until the 1980s, AI was assumed to exist only in science fiction. Advancements in computing, however, made it possible for computers to process large amounts of data, and AI programs could soon draw on stored knowledge to respond to human input like an expert.
These “expert systems” form the basis of today’s AI chatbots and spurred the creation of sophisticated AI systems like Deep Blue, the chess supercomputer that defeated grandmaster Garry Kasparov in a highly publicized 1997 match.
In a less savory development, large companies were quick to begin experimenting with AI’s modern descendant, generative AI, in order to cut costs.
This prompted unions in creative industries to go on strike, demanding that AI be banned from a variety of contexts because it threatens both the quality of work and the livelihoods of workers.
This misuse of AI technology inhibits the growth of humanity as a whole, automating creativity by scraping the work of real artists from the internet without credit or adequate compensation.
Despite these challenges, AI still has the potential to change the world for the better.
Why we aren’t doomed
Although AI seems to be rapidly developing away from Turing’s original vision, artificial intelligence has the potential to pioneer new forms of assistive technology, aid in the classroom and advance the field of medicine.
Researchers suggest that by 2050, 3.5 billion people will need assistive technology. This includes visually impaired individuals, those who require mobility aids and those with age-related cognitive disabilities.
Today, AI has already proven itself useful in reducing the accessibility gap for everyday tasks.
Google’s Project Guideline helps blind individuals run without guide dogs or other aids, a Carnegie Mellon University professor’s robotic suitcase allows blind travelers to navigate airports with ease and AI-generated captions help hearing-impaired students understand classroom content.
In addition, dementia care practitioners have begun using AI to detect patterns in patients’ brain activity and speech, allowing declines in cognitive function to be diagnosed early.
AI is even being used to develop smart prosthetics that respond to the user’s nerve and muscle activity, giving amputees a wider range of movement.
Lawmakers are also working with subject matter experts to create legislation that would monitor possible misuses of AI and ensure that the technology is used for the betterment of society as a whole, rather than enriching companies.
For example, the state legislature of Illinois proposed a bill that would require AI algorithms used to diagnose patients in medical settings to be certified and shown to achieve accurate, unbiased results. In New York, legislation would require advertisers, politicians and film production companies to disclose uses of synthetic media, or generative AI.
However, the film industry isn’t the only job market in for a reckoning. Jobs in the computer science, humanities and education fields are at risk of automation as well.
Even though no concrete regulations are in place to prevent AI from taking over these jobs, the United Nations considers that outcome unlikely. AI is far more likely to assist in these fields, providing increased accessibility, efficiency and reliability if the proper policies are adopted.
Privacy is also a large concern with AI, yet the European Union has proposed measures to restrict the abilities of AI-powered data collection algorithms, and a slew of U.S. states have introduced legislation to outright ban AI from sorting through personal data for employment or insurance purposes.
Despite these advancements, it is clear that humanity has barely scratched the surface of AI’s capabilities. Regulatory frameworks must be established to expand the use of artificial intelligence throughout society and everyday life while preserving its role as a complement to human ingenuity, not a substitute.
So, AI most likely won’t end the world, but that doesn’t mean users should stop saying “thank you” to Siri or ChatGPT. They’ll probably appreciate it in the future.