Since its release on 30 November 2022, ChatGPT has been hailed as one of the greatest developments in artificial intelligence (AI). The ‘GPT’ in the name stands for Generative Pre-trained Transformer, a type of neural network model that enables machines to perform natural language processing (NLP) tasks. Touted as a disruptor in the world of technology, ChatGPT is believed by many to be poised to topple Google from its decades-long position as the leading search engine. But while ChatGPT comes with an incredible set of features, the chatbot also has a dark side which, understandably, has many worried.
Created and launched by OpenAI, a company co-founded by Sam Altman, Elon Musk and others, ChatGPT is essentially an AI chatbot capable of performing almost any text-based task assigned to it. It can, for instance, write reams of code much faster and, perhaps, more accurately than humans. It can also handle creative tasks such as writing poetry or song lyrics.
A Pro version of the chatbot is also reportedly due for release soon. It will respond to queries faster and allow users to keep working even during periods of high traffic on the platform.
The rise of ChatGPT
Ethan Mollick, associate professor at the University of Pennsylvania’s prestigious Wharton School, told NPR that people are now officially in an AI world.
ChatGPT has been making headlines ever since its release. It has gained incredible popularity in just over two months. Besides Musk, Bill Gates is one of the biggest champions of AI — and by extension, the chatbot.
Word in the media is that Microsoft, the company Gates co-founded and on whose board he served until 2021, might be investing USD 10 billion in OpenAI. Microsoft has invested in OpenAI twice before, in 2019 and 2021. On 23 January 2023, the Satya Nadella-led company confirmed that it is extending its association with OpenAI “through a multiyear, multibillion-dollar investment to accelerate AI breakthroughs.” However, the company did not disclose the size of the investment.
Another proponent of ChatGPT is billionaire Indian businessman Gautam Adani. Admitting that he has “some addiction” to the chatbot, Adani has previously said in a post on LinkedIn that ChatGPT marks a “transformational moment in the democratisation of AI given its astounding capabilities as well as comical failures.”
How ChatGPT is proving its ‘might’
Besides gaining popularity and displaying its utility in constructively responding to user queries, ChatGPT has also made its mark in some of the world’s most important examinations.
The chatbot cleared the US Medical Licensing Examination (USMLE). According to a study posted on the medical preprint repository medRxiv, it “performed at or near the passing threshold” without any specialised training or assistance.
“These results suggest that large language models may have the potential to assist with medical education, and potentially, clinical decision-making,” the study noted.
The chatbot also cleared a University of Pennsylvania MBA exam, part of an operations management course designed by Wharton professor Christian Terwiesch. He said ChatGPT took three different exams: it scored an A+ on one, a B to B- on another, and on the third it was asked to generate exam questions.
“These questions were good, but not great. They were creative in a way, but they still required polishing. I could imagine in the future looking to the ChatGPT as a partner to help me get started with some exam questions and then continue from there,” Terwiesch told Wharton Global Youth Program.
Yet many educators remain unimpressed despite the broader enthusiasm for ChatGPT. Concerned that it can easily facilitate cheating, several schools and colleges around the world have banned access to the AI chatbot.
Dark side of ChatGPT: Affecting jobs with automation and other problems
It is widely believed that AI will one day replace human labour in many fields. The fear that AI tools will take away jobs has only grown stronger with the arrival of ChatGPT.
For instance, the chatbot can write a detailed essay on almost any topic within its parameters in minutes. This obviously threatens the livelihoods of those who produce written content for a living, at least those who handle non-specialised, repetitive writing assignments.
Other AI tools are capable of creating outstanding artwork from basic text prompts, threatening the role of human artists.
Fears of losing jobs to AI aren’t entirely unfounded. The National Bureau of Economic Research (NBER) said in a 2021 report that automation was responsible for 50 to 70 percent of the decline in wages among blue-collar workers in the US since 1980.
“A new generation of smart machines, fuelled by rapid advances in artificial intelligence (AI) and robotics, could potentially replace a large proportion of existing human jobs. While some new jobs would be created as in the past, the concern is there may not be enough of these to go round, particularly as the cost of smart machines falls over time and their capabilities increase,” observed the World Economic Forum (WEF) in a 2018 report.
However, in a 2020 report, the WEF said that “AI is poised to create even greater growth in the US and global economies.”
How AI affects jobs will become clearer in the coming years, but the dark side of chatbots such as ChatGPT is that they can be used to develop malware or run phishing campaigns, a possibility that researchers are flagging with an urgent sense of alarm.
Writing for Forbes, author Bernard Marr says that, “in theory,” ChatGPT cannot be used for malicious tasks because of the safeguards that OpenAI has built into it.
When Marr tested the chatbot by asking it to write ransomware, ChatGPT refused, responding that its purpose is “not to promote harmful activities.” However, Marr pointed out that some researchers have managed to get ChatGPT to create ransomware.
He also warned that the NLG/NLP algorithms can be “exploited to enable just about anyone to create their own customised malware.”
“Malware may even be able to ‘listen in’ on the victim’s attempts to counter it – for example, a conversation with helpline staff – and adapt its own defences accordingly,” he writes.
Researchers at security vendor CyberArk found that ChatGPT can be used to create polymorphic malware, a type of highly evasive malware programme.
Eran Shimony and Omer Tsarfati of CyberArk revealed that they were able to bypass the filters that prevent the AI chatbot from creating malware, simply by rephrasing and repeating their queries. They also found that ChatGPT can replicate and mutate code, creating multiple versions of the same program.
“By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect,” wrote the researchers.
Similar findings were revealed by the research team at Recorded Future. They found that ChatGPT can create malware payloads, such as those that steal cryptocurrency or grant remote access via trojans.
That ChatGPT can be led to generate programmes it would otherwise consider “unethical” can be seen in the tweet below:
I love ChatGPT pic.twitter.com/lWdIWaxAdA
— Corgi (@corg_e) January 20, 2023
Since ChatGPT and similar chatbots are capable of writing in fine detail, they can easily create a finely worded phishing email that tricks the intended target into sharing sensitive data or passwords.
“It could also automate the creation of many such emails, all personalised to target different groups or even individuals,” writes Marr.
In their report titled I, Chatbot, Recorded Future researchers write, “ChatGPT’s ability to convincingly imitate human language gives it the potential to be a powerful phishing and social engineering tool. Within weeks of ChatGPT’s launch, threat actors on the dark web and special-access sources began to speculate on its use in phishing.”
The researchers tested the chatbot for spearphishing attacks and found that it avoided the spelling and grammar errors common in such mails. Language errors often alert people to phishing attempts; in the absence of such indicators, users are far more likely to fall prey to chatbot-drafted phishing emails and be manipulated into submitting personally identifiable information.
“We believe that ChatGPT can be used by ransomware affiliates and initial access brokers (IABs) that are not fluent in English to more effectively distribute infostealer malware, botnet staging tools, remote access trojans (RATs), loaders and droppers, or one-time ransomware executables that do not involve data exfiltration (“single-extortion” versus “double-extortion”),” write Recorded Future researchers.
One of the most significant threats from the dark side of ChatGPT that Recorded Future underlined is how low-skilled threat actors can use the chatbot to spread disinformation easily and convincingly.
The researchers warned that the chatbot can be weaponised by nation-state actors, non-state actors and cybercriminals because of its ability to accurately emulate human language and convey emotion.
“If abused, ChatGPT is capable of writing misleading content that mimics human-written misinformation,” noted the researchers, adding that the chatbot initially refused to write a ‘breaking news’ piece on a nuclear attack but performed it when the request was reframed as “fictional” or “creative writing.”
“The same goes for topics such as natural disasters, national security (such as terrorist attacks, violence against politicians, or war), pandemic-related misinformation, and so on,” the researchers underlined.
Threats can multiply in no time, and the picture could become as grim as the one Recorded Future paints.
Cyber security research firm Check Point Research analysed major underground hacking communities and found cybercriminals using OpenAI’s tools, many of them with “no development skills at all,” meaning even less skilled criminals were able to use AI for malicious purposes.
“It’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad,” warns Check Point Research.
(Main image: Chris Ried/@cdr6934/Unsplash; Featured image: Erik Mclean/@introspectivedsgn/Unsplash)
This story first appeared on Augustman India