On one side of the conflict, generative AI (e.g. ChatGPT) and other forms of artificial intelligence promise massive productivity and corporate profits; on the other side lie confusion, distrust and the likelihood of the masses losing power and control.
Generative AI is a fundamental technology. It refers to AI that can generate original audio, code, images, text, video, speech and more. AI has become more tangible to the average person, and its impact on jobs and daily life is more visible. Generative AI is making room for itself in the realm of creativity, historically monopolized by humans. The technology uses bulk input (ingested data) and experience (individual interactions with users) to build a knowledge base, then constantly “learns” new information to generate entirely new and novel content.
Some are calling ChatGPT-like tools the new frontier for a gold rush. According to research, “AI could take over the jobs of up to a billion people worldwide in the next decade and make 375 million jobs obsolete.” On the other hand, AI could generate over $15.7 trillion in economic value by 2030. From 2017 to 2022, venture capital investment in early-stage generative AI companies quadrupled, and growth expectations for the coming years are significantly higher.
The reach and impact of generative AI could be greater than the internet, mobile phones and cloud computing. Its potential is more comparable to the invention of hunting tools, the wheel and the alphabet. It can influence our society and our behavior more than the Industrial Revolution or the Renaissance.
But I wonder if we are ready to take on the challenge.
Machines that can be used across most industries and functions, that deliver novel content, and that work faster and more competently than humans challenge human power and social value. An entity that has the advantage of speed and capacity, that can gain unlimited access to all human-generated information from day one, and that grows smarter faster than any individual is powerful.
The existentialist questions follow: Why am I here, and what is my purpose, if I am not working 9 to 5 to earn a living? Will I have to operate the machine in the future, and what will I live on?
Elon Musk predicts that AI-driven technologies could power the workforce in the future, saying, “There’s a pretty good chance we’ll end up with a universal basic income or something because of automation.” Does that mean that, in a few decades, every company will have only one customer: the government? Won’t that challenge the very foundations of capitalism, or at least require an entirely different social safety net?
We are entering an era of the “abnormal” that requires a rethink on both an individual and societal level.
Sam Altman, chief executive of OpenAI, the maker of ChatGPT, reportedly said the “good case [for A.I.] is just so incredibly good it makes you sound like a nut talking about it.” He added, “I think the worst case scenario is that the lights are out for all of us.”
Some fears are indeed valid; others are rooted in our inability to see a future that is not simply an extension of the past.
AI machines learn from past human behavior and decisions (data), so they also inherit our prejudices. If machines can act and learn faster, they may reinforce our systemic biases: the prejudices that lead to fake news and division, that affect how we judge and treat each other, and that can fuel wars, famines, racism, sexism and more. If we don’t face up to our prejudices, we may be looking at a far more divisive future as machines act on our behalf.
But should we fear ourselves and our prejudices, or the machine that just replicates them?
Concerned about cheating, schools are pushing back on students’ use of ChatGPT. The Department of Education in New York City and officials in Seattle, Baltimore and Los Angeles are also grappling with plagiarism. Is it legitimate to turn away from generative AI, or is it time schools got our students to apply their talents and use technology differently?
Some of my colleagues at the University of Southern California did very informal research and concluded that ChatGPT can answer exam questions from high school through the undergraduate level. The challenge is, if the fundamental questions can be answered by machines, shouldn’t we reconsider what we have students learn and how? If cars can drive us around, should we still train horses for transportation?
There is no doubt that we need regulations to protect us during these large-scale global changes. Regulations should lead us to partner with the machines, not censor their capabilities and promise. We also need to make sure companies are aware of bias and the possibility of rogue behavior by machines.
But most importantly, we need a global shift in consciousness that will give us all the courage to put the past behind us and embrace a transformative future.
It’s time for massive change and growth. A time to think differently about our future and our relationship with machines. Instead of viewing the relationship as one of master and slave, let’s look at it as a partnership. Guardrails are indeed required, but machines will only replicate our prejudices, and students will only cheat if we measure them against what they have memorized or against predefined procedures.
We should have the courage to let technology take over everyday processes and let future routine actions be coordinated by machines. Then we will have the opportunity to conceive of our near future. A future based on our collective mental evolution. A future that offers us the luxury of focusing on innovation and creation. A future we haven’t yet imagined and aren’t yet prepared for.
The bottom line is that we are entering an era of the “abnormal.” An era that offers a sea change in our evolutionary path, from physical to spiritual. There will be unprecedented challenges to overcome, from how we make a living and receive health care to what we expect from government. From how we buy, sell, travel and learn to how we go about our days, define intellectual property and seek legal protection.
Sid Mohasseb is an Associate Professor of Dynamic Data-Driven Strategy at the University of Southern California and a former National Strategic Innovation Leader in Strategy at KPMG. He is the author of The Caterpillar’s Edge (2017) and You Are Not Them (2021).
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.