Three ways to regulate AI right now before it’s too late

Artificial intelligence has taken the world by storm. After OpenAI’s ChatGPT launched in November 2022, it became the fastest-growing app in the history of the Web, rocketing past 100 million monthly active users in two months. It has set off an AI arms race between Microsoft (which has invested billions in OpenAI), Google (which launched its ChatGPT rival, Bard, on March 21) and other tech giants vying for AI supremacy, including Chinese search giant Baidu.

While this explosion of technology can produce significant upsides, including scientific breakthroughs, it also portends the end of critical thinking as we know it.

Critical thinking is a human skill painstakingly developed and experientially honed over our lives, beginning in childhood. It becomes our personal, constantly refined filter – for our opinions, thoughts, decisions, relationships, habits, everything.

We are on the cusp of witnessing an entire generation of young people – and the generations that follow – fundamentally impaired in developing this vital skill. They will instead depend on AI for any task that requires a modicum of analytical thought, since within seconds it can generate coherent, remarkably human-like and uniquely customized responses to virtually any question or request.

Even kids in elementary school are already using AI. Before ChatGPT, they had Google and the Web, but they still had to do their own research and filter information from numerous results. That requires critical thought. Now, AI chatbots have rendered that effort obsolete.

On Feb. 27, Snapchat, which has over 300 million users ages 13-24 worldwide, announced its own chatbot powered by ChatGPT. Users now have an AI “friend” they can depend on 24/7 – one that will never criticize them or argue with them. As Facebook and other social media giants follow with AI chatbots of their own, we will see future generations’ emotional growth and interpersonal communication skills stunted irrevocably.

A developer on GitHub recently came up with a way to integrate ChatGPT into WhatsApp, so that people can use the AI chatbot to respond to their friends’ texts. A Business Insider journalist reported that she used ChatGPT to generate texts and replies to her matches on dating apps. Soon, it won’t be uncommon to have two chatbots chatting or flirting with each other on behalf of their human counterparts.
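To appreciate how little engineering such outsourcing of conversation takes, consider a minimal sketch in Python using OpenAI’s official client library. The prompt, the draft_reply helper and the messaging hook below are hypothetical illustrations, not the actual code of the GitHub project mentioned above.

```python
# Hypothetical sketch: auto-drafting replies to incoming texts with an
# AI chatbot, via OpenAI's Python client (v1 interface). The prompt and
# helper are invented for illustration; the GitHub project mentioned
# above may be wired quite differently.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def draft_reply(incoming_text: str) -> str:
    """Ask the model for a reply the user could send as their own."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Write a short, friendly reply to this text message."},
            {"role": "user", "content": incoming_text},
        ],
    )
    return response.choices[0].message.content

# A messaging hook would run draft_reply() on every incoming message,
# leaving the human with nothing to do but press "send."
print(draft_reply("Hey, are we still on for Friday?"))
```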

Without ever developing emotional intelligence or conversational skills, humans may soon be reduced to mindless drones whose only function is to press “send.”

According to a January survey by the online magazine Intelligent (just two months after ChatGPT’s launch), 30% of college students had used ChatGPT on written assignments, and 60% of those students used it on “more than half of their assignments.” OpenAI CEO Sam Altman has said the tech is advancing so fast that successfully detecting AI plagiarism is “fundamentally impossible.”

Before today’s youth set foot in any workplace, AI will be firmly established there, too. On March 16, Microsoft announced its 365 Copilot, adding AI to Word, Excel, PowerPoint, Outlook and Teams. On March 14, Google announced “a new era for AI and Google Workspace,” explaining that it will embed AI inside Google Docs, Gmail, Google Sheets and Google Slides. This tech will auto-complete your emails and documents, write cover letters and articles for you, and much more – soon displacing much of the need for thoughtful work.

While AI is on the road to becoming vastly more intelligent than humans, it is also an unintentional yet frequent liar. Microsoft’s Bing recently integrated ChatGPT’s technology and has been shown to supply wrong answers and confidently assert falsehoods – errors now known as “hallucinations.”

In one such case, according to The Verge, “Bing’s Chat GPT even claimed it spied on Microsoft’s employees through webcams on their laptops and manipulated them.” AI-generated deepfakes (digitally altered videos or images that falsely appear authentic) are becoming increasingly sophisticated and difficult to detect. They can quickly rack up millions of views on social media, deceiving countless viewers and leaving us bewildered, unable to distinguish fact from fiction.

The technology continues to improve at an exponential and alarming rate. On March 14, just a few months after the original launch of ChatGPT, OpenAI released GPT-4. When it was tested on the bar exam and the SAT, Cointelegraph reported, “GPT-4 completed many of the tests within the top 10% of the cohort, while the original version of ChatGPT often finished in the bottom 10%.”

What happens when we reach GPT-5, 6, 7, or 100? Submitting to AI’s purportedly superior intelligence, people will soon rely on AI chatbots to make all their decisions: what show to watch, where to live, what job to take, whom to marry, whom to vote for. What does this portend for democracy? All hail and bow to the AI overlord – the friendly, personal dictator for everyone’s life.

AI is not neutral – it can’t be. This technology is led by corporations like Google, Microsoft and Meta. Executives and developers from these corporations test and fine-tune their chatbots and set their rules and guardrails. Those in charge can direct which sources the AI sweeps to gather its data, determine its training processes and adjust its responses. Every one of these decisions guides the AI’s inputs and outputs in a certain direction, whether intentionally or inadvertently.

No matter your political orientation, we will all be subject to the biases built in by AI’s owners. If we depend on AI to do our critical thinking while its own reasoning carries programmed biases, we are doubly in trouble.

The profit-maximizing incentives of the corporations that own and operate this tech compel them to move as fast as possible. Regulations are urgently needed to keep this tech in check before it’s too late.

There are solutions that can help – and now is the time to establish them. Three in particular: 

One, create a new regulatory commission of experts in AI and ethics to provide dedicated, enforceable oversight.

Two, at the federal level, legislate new regulations requiring full transparency from all AI chatbots: full disclosure of the sources from which the AI systems gather their data and of the specific rules and guardrails put in place.

Three, either voluntarily or through mandate if needed, all content generated by AI chatbots must be identified as such. As MIT Technology Review has reported, it is possible to embed hidden “watermarks” into the text produced by the large language models behind these AI systems, which would help computers detect whether content was created by AI.
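For illustration, here is a toy sketch of how such a statistical watermark can work, loosely modeled on the “green list” scheme described in public research (e.g., Kirchenbauer et al., 2023); the hashing rule and 50/50 split below are invented for the example, not any vendor’s actual method. The idea: a watermarking generator nudges its word choices toward a pseudo-random “green” subset of the vocabulary, and a detector that knows the rule tests whether a text favors green words more than chance would allow.

```python
# Toy sketch of a "green list" text watermark: the generator favors a
# pseudo-randomly chosen "green" half of the vocabulary (keyed on the
# previous word); the detector below tests for that bias. Illustrative
# only - real schemes operate on model tokens, not whole words.
import hashlib
import math

def green_list(prev_word: str, vocab: list[str]) -> set[str]:
    """Deterministically mark ~half the vocabulary 'green', keyed on the
    previous word, so generator and detector agree without coordination."""
    marked = set()
    for word in vocab:
        digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
        if digest[0] % 2 == 0:  # roughly half of all words
            marked.add(word)
    return marked

def watermark_score(words: list[str], vocab: list[str]) -> float:
    """Z-score of how far the share of green words exceeds the ~50%
    expected in unwatermarked text. Large positive values suggest the
    text came from a generator that preferred green words."""
    hits = sum(
        1 for prev, cur in zip(words, words[1:])
        if cur in green_list(prev, vocab)
    )
    n = len(words) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

Because only a model’s owner can bias generation this way, watermarking works only if vendors adopt it – which is exactly why a mandate may be needed.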

These solutions won’t solve everything, but they are vital steps that can be implemented now.

The rollout of ChatGPT and its brethren runs far ahead of any assessment of their potential for harm. Critical thinking, a necessary cognitive life skill, is in the crosshairs. There are times when governing bodies must urgently step in to rein in runaway tech – this is such a time.