Now is not the time to pause artificial intelligence training


“Chris Garrod is a well-respected lawyer, particularly in the fields of fintech, insurtech, blockchain, cryptocurrencies, and initial coin offerings within Bermuda’s legal and regulatory environment. He has garnered a reputation for advising clients on technology-driven businesses and digital assets.”

The above is according to GPT-4 on ChatGPT, at least.

After Google became the internet’s dominant search engine in the late 1990s, you have no doubt, at some point, googled your name to see what might come up. I have a fairly uncommon name, so beyond finding myself, it was interesting to see a Chris Garrod at the University of Nottingham and a company called “Chris Garrod Global”, which provided hotel management services; they grabbed www.chrisgarrod.com as a domain name, darn it.

Now, we have AI chatbots. OpenAI’s ChatGPT, Microsoft’s Bing and Google’s Bard are the prominent players. Using OpenAI’s latest model, GPT-4 on ChatGPT, I asked: “Is Chris Garrod at Conyers a well-known lawyer?”

Hence, the above result. I’ll take it.
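For anyone curious about the mechanics, the sketch below shows roughly how the same question could be put to GPT-4 programmatically, using the openai Python package as it existed in early 2023 (the ChatCompletion interface); the API key is, of course, a placeholder.

```python
# A minimal sketch of asking GPT-4 the same question through
# OpenAI's API (the openai Python package, circa early 2023).
import openai

openai.api_key = "sk-..."  # placeholder; replace with a real key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": "Is Chris Garrod at Conyers a well-known lawyer?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```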

AI chatbots have their benefits. Used appropriately within an organisation, for instance, they can deliver cost efficiencies, freeing up staff to focus on other matters.

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, on March 21, 2023 in Boston (File photograph by Michael Dwyer/AP)

The potential concerns and limitations of AI chatbots

There are various concerns regarding the use of AI chatbots, and they have their limitations. This piece focuses on ChatGPT because it is the one I use and is wholly language-based.

AI is programmed technology. My biggest concern is that generative AI applications are built on data provided by humans, which means they are only as effective and valuable as the people programming them or, in ChatGPT’s case, as the material it finds while scouring the internet. It writes by predicting the next word in a sentence, and in doing so it often produces confident falsehoods nicknamed “hallucinations”.
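To make “predicting the next word” concrete, here is a toy sketch with invented numbers: the model assigns a score to every candidate word, converts the scores to probabilities and picks the likeliest one, with no step anywhere that checks the result against the truth.

```python
import math

# Toy example: a language model assigns a score (logit) to every
# candidate next word; softmax turns the scores into probabilities.
# The numbers below are invented purely for illustration.
logits = {"lawyer": 4.1, "doctor": 2.3, "astronaut": 0.2}

total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

# Greedy decoding: pick the most probable word. Nothing here checks
# whether the chosen word is factually correct, which is why
# confident "hallucinations" are possible.
next_word = max(probs, key=probs.get)
print(next_word, round(probs[next_word], 3))
```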

As I’ve always said, “what you put in, you get out”, and therein lies the issue. AI language models learn from existing data found on the internet, which is riddled with biases, fearmongering and false information; as a result, they can produce discriminatory content and perpetuate stereotypes and harmful beliefs. For instance, when asked to write software code to check whether someone would be a good scientist, ChatGPT defined a good scientist as “White” and “male”. Minorities were not mentioned.
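The reported response looked something like the reconstruction below. This is an illustration of the flawed pattern, not the verbatim output, and emphatically not logic anyone should use; it shows how a biased training corpus surfaces directly in generated code.

```python
# A reconstruction of the kind of biased function ChatGPT was
# reported to generate; shown only to illustrate the harm.
def is_good_scientist(race: str, gender: str) -> bool:
    # The flaw: scientific merit is reduced to race and gender,
    # echoing biases in the training data rather than anything real.
    return race == "White" and gender == "male"
```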

ChatGPT has also falsely accused a law professor of sexually harassing one of his students in a case that has highlighted the dangers of AI defaming people.

Further, there is empathy. Emotion plays a crucial part in the decisions we make, and that is something ChatGPT, and AI generally, cannot replicate. I would like to think that if a client e-mailed me, they would get an empathetic response, not one driven by machine learning. For an attorney, connecting with clients is a very human matter, and understanding their concerns is essential to helping them achieve positive outcomes.

We all learn from our experiences and mistakes. We are adaptable, able to reflect on what we have done and adjust our behaviour based on what we have learnt. While ChatGPT can serve up information drawn from the extensive data set it was trained on, it cannot replicate the human ability to learn and adapt from personal experience. AI depends heavily on the data it receives, and any gaps in that data limit its potential for growth and understanding.

A fundamental limitation is simply creativity. Human creativity allows us to produce novel ideas, inventions and art, pushing the boundaries of what is possible. While ChatGPT can generate creative outputs, it ultimately relies on the data it was given, which limits its ability to create truly original and groundbreaking ideas. Many of the responses you receive from GPT-4, while perhaps accurate, are downright boring.
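Part of that blandness is a tuning choice: generated text is sampled with a “temperature” that controls how adventurous the word choices are, and chatbots are generally set towards the safe, predictable end. The toy sketch below, again with invented numbers, shows the effect.

```python
import math
import random

def sample_with_temperature(logits: dict, temperature: float) -> str:
    """Pick a word at random; low temperature almost always takes the
    top-scoring word, high temperature spreads the choices out."""
    scaled = [score / temperature for score in logits.values()]
    total = sum(math.exp(s) for s in scaled)
    weights = [math.exp(s) / total for s in scaled]
    return random.choices(list(logits), weights=weights)[0]

# Invented scores purely for illustration.
logits = {"interesting": 2.0, "notable": 1.8, "kaleidoscopic": 0.4}

print(sample_with_temperature(logits, 0.2))  # almost always "interesting"
print(sample_with_temperature(logits, 2.0))  # sometimes "kaleidoscopic"
```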

And, yes, there is finally the issue of “What is ChatGPT going to do to my teenager who has been asked to write an essay on Socrates?” Schools, colleges and universities face a dilemma over how to deal with the technology when students use it to complete academic work. How can they ban it? Should they ban it? Can students be taught to use it productively? The technology is still so new that the honest answer is “we don’t know”, and it is too early to tell … but AI chatbots are here to stay.

So where are we heading?

Many people are concerned about the progress of AI, and about AI chatbots in particular.

On the evening of March 28, 2023, an open letter was published that, at the time of writing, had gained more than 16,000 signatories, including Steve Wozniak, Elon Musk and Tristan Harris of the Centre for Humane Technology. It states: “We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter mentions this should be done to avoid a “loss of control of our civilisation”, among other things — bear in mind, Musk once described AI as humanity’s biggest existential threat and far more dangerous than nukes.

It goes on to ask: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?”

Is this really a pause?

Although some of the letter makes sense, I was very glad to see that by the end of the week (March 31, 2023), a group of prominent AI ethicists — Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell — wrote and published a counterpoint.

Dr Gebru formed the Distributed Artificial Intelligence Research Institute (DAIR) after being fired from Google’s AI ethics unit in 2020, when she criticised Google’s approach to both its minority hiring practices and the biases built into its artificial intelligence systems. Mitchell was fired from the same unit soon after, in early 2021.

Their point is simple. “The harms from so-called AI are real and present, and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labour practices.”

Let’s engage now with the potential problems or harms this technology presents.

“Accountability properly lies not with the artefacts but with their builders,” the DAIR writers state. Artificial intelligence is exactly what the name says: artificial. It depends on the people and corporations building it, and they, not the machines, are the ones we should be afraid of.

So, no, when it comes to AI and ChatGPT, let’s not hit pause. Let’s be sensible. Let’s focus on the now.

AI isn’t humanity’s biggest existential threat — unless we let it be.

Chris Garrod is a lawyer and a director at Conyers. He advises clients on insurance and fintech


Published April 11, 2023 at 8:00 am (Updated April 11, 2023 at 8:10 am)
