Opinion: Ethical concerns surround artificial intelligence

Have you seen The Matrix, the 1999 sci-fi movie starring Keanu Reeves?

In the film, we learn that mankind developed artificial intelligence, and after taking it too far, the sentient machines rebelled. Humanity was all but destroyed, as was the world.

Of course, there’s plenty of great action in the film leading to a satisfying ending for our main human characters, but I want to consider the role of the AI in the film.

With the release of the chatbot ChatGPT late last year, the technology — and the conversation surrounding AI — intensified. And ethical questions abound.

Some common ones: Will AI replace human workers? Can AI be entrusted with business tasks? Should content generated by AI be marked as such for reasons of transparency?

Or, in the case of The Matrix: if AI machines eventually gain sentience, how will they be treated? Would an AI society be accepted? And what would be the ramifications if AI were mistreated?

I’ve been testing ChatGPT for the past few days to see how far the technology can go. I asked it a few questions about ethical concerns and possible consequences.

Ethical responsibility is important in business, but I’m thinking of larger companies with a strong focus on profits and reporting to shareholders. Are ethical boundaries kept in check in such situations? I asked ChatGPT to weigh in.

“It’s important to note that ethical considerations are increasingly becoming an important part of the conversation about AI, and many companies are taking steps to ensure their AI systems are being developed and used in a responsible and ethical manner,” writes ChatGPT. “One way companies are addressing ethical concerns about AI is by investing in ethical AI research and development. Many companies have established teams and departments dedicated to ethical AI research and development, and some are also working to establish ethical AI policies and principles.”


ChatGPT also referred to the Federal Trade Commission’s guidance on ethical use of AI.

While it’s still in its infancy and a far cry from the dystopian view presented in films like The Matrix, we still have to consider ethical issues as we delve into this world. With big investments in AI by companies like Google and Microsoft, the technology seems poised to skyrocket in popularity and value.

The FTC guidance urges companies developing AI to consider regulatory requirements. For example, an AI cannot engage in unfair or deceptive practices any more than a human. Then there is the Equal Credit Opportunity Act.

“ECOA prohibits a company from using a biased algorithm that could result in credit discrimination based on race, color, religion, national origin, sex, marital status, age, or because an individual receives public support,” FTC officials say.

When asked, ChatGPT says it is built on a “large neural network trained on vast amounts of text data.”

It adds: “Because AI systems are trained on data, they can recognize bias and perpetuate it in their decisions and recommendations. This can lead to unfair and discriminatory outcomes for certain groups of people.”

Back to the FTC Guidance. The agency’s guidance includes recommendations for companies adopting AI in their operations:

Watch out for discriminatory results. AI needs to be tested for biases so they can be weeded out.

Embrace transparency. Make it clear to your customers that AI is being used and for what reason. ChatGPT, for example, is quick to point out that it is a human-developed AI that is not infallible.

Tell the truth about how you use data. If users must opt in to an AI service, explain plainly how their data will be used.


These enterprise guidelines are a good place to start, but we’re only scratching the surface of what AI can do.

As with other technologies, AI requires constant training and adaptation, especially when deployed in a business-to-business or business-to-consumer scenario.

It seems we are in the midst of a major paradigm shift in the way technology works. And with that comes no shortage of ethical concerns about how AI interacts with humans.

So I asked ChatGPT for a top recommendation on how companies can use AI ethically as the technology evolves.

“One of the top recommendations for organizations to use AI ethically is to ensure their AI systems are designed and deployed with transparency and accountability in mind,” writes ChatGPT. “This includes being transparent about how the AI works, what data it uses and how it makes decisions or recommendations.”

Springfield Business Journal Digital Editor Geoff Pickle can be reached at [email protected]