What are the Risks of ChatGPT?
Stephen Hawking, the theoretical physicist, once said, “The development of full artificial intelligence could spell the end of the human race.”
ChatGPT could be one step closer to fulfilling this prediction.
ChatGPT is a conversational language model developed by OpenAI and designed to provide human-like responses to any given prompt. It was created in an attempt to advance the field of artificial intelligence and bring new capabilities to chatbots, virtual assistants, and language translation. According to a UBS study, ChatGPT reached about 100 million monthly active users in January, just two months after its release. With capabilities and attention like these, many argue that its risks outweigh its benefits.
ChatGPT has access to a wide range of human information and is free to use. This becomes a cybersecurity problem when personal information, such as passwords, falls within reach of hackers and thieves.
One way attackers can access personal information is through phishing emails—emails that are deliberately crafted to trick recipients into carrying out harmful instructions. These emails can give the appearance of a legitimate source, and often include malware attachments, untrustworthy links, or ask for money and private information.
Phishing emails are the most common delivery vehicle for malware, which is any kind of software that aims to harm the user in some way. ChatGPT makes this process even easier for attackers with its human-like writing style and intelligence.
To mitigate the risks of malware, it’s important to monitor and secure your email and personal information. This can be done through strong, complex passwords, two-factor authentication, antivirus software, and staying informed and cautious.
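To make the "strong, complex passwords" advice concrete, here is a minimal sketch of a password-strength check. The specific rules and thresholds below are illustrative assumptions for this article, not an official security standard, and a real system should also use a password manager and two-factor authentication as noted above.

```python
import re

def password_strength(password: str) -> str:
    """Rough strength estimate based on length and character variety.
    The scoring thresholds here are illustrative, not a formal standard."""
    score = 0
    if len(password) >= 12:                                        # sufficient length
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):  # mixed case
        score += 1
    if re.search(r"\d", password):                                 # includes a digit
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):                       # includes a symbol
        score += 1
    return ["weak", "weak", "fair", "good", "strong"][score]

print(password_strength("password"))                # weak
print(password_strength("C0rrect-Horse-Battery!"))  # strong
```

A short, common word scores "weak" on every rule, while a long passphrase mixing cases, digits, and symbols scores "strong" — the same intuition behind most password-strength meters.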
Additionally, while ChatGPT has access to large amounts of information, it’s subject to the inaccuracies and biases of human-written information.
For example, there have already been instances where racist and sexual biases have emerged and created controversy on social media.
Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked.
— steven t. piantadosi (@spiantado) December 4, 2022
Despite picking up on human bias, ChatGPT is blocked from engaging in conversations that may be deemed inappropriate. OpenAI uses its Moderation API to monitor conversations and prevent the mass production of unsafe content. As more instances of bias emerge, the API is updated. So while OpenAI is aware of harmful bias and actively trying to stop it, this raises the question of what ChatGPT would truly be capable of without limitations on its software.
we know that ChatGPT has shortcomings around bias, and are working to improve it.
but directing hate at individual OAI employees because of this is appalling. hit me all you want, but attacking other people here doesn’t help the field advance, and the people doing it know that.
— Sam Altman (@sama) February 1, 2023
With its growing capabilities, many wonder how ChatGPT will translate into a classroom setting. Not only can the platform provide quick, easy-to-understand information to students, it can also write research papers, poems, legal briefs, software code, etc.
This raises questions of plagiarism and whether the content received from ChatGPT is truthful and trustworthy.
Many educators hope that ChatGPT brings to writing and research what calculators brought to math—acting more as a tool than as a replacement. In this way, students have the opportunity to expand their critical thinking and writing skills, whether that means using the platform to help form introductory paragraphs for essays, get past writer’s block, or kickstart their research.
While ChatGPT can bring many benefits to education when used properly, it becomes a problem when students heavily rely on the program to get good grades, and when professors are unable to keep up with their students’ understanding of newer technology.
Finally, AI poses threats to numerous job sectors—threats that may not erase careers, but could alter them in major ways. ChatGPT still has a long way to go in terms of reliability, but lower-level jobs that could be replaced with a more efficient option may be the first to suffer. The following fields are most likely to be affected:
- Journalism: While ChatGPT cannot yet fact-check reliably, it can craft expert-sounding articles. As the platform grows more capable, producing a well-written, factually sound article could either become more efficient for journalists or replace some positions.
- Software engineering: With ChatGPT’s ability to write simple code, it has the potential to advance the efficiency of higher-end software engineering jobs. However, many worry that relatively simple software engineering jobs, such as website creation, could be at risk.
- Finance: ChatGPT can impact important financial decisions with its ability to create trading and investment models at much faster speeds than humans. Positions where employees are hired to create such models could easily be replaced with AI.
- Graphic design: Along with ChatGPT, OpenAI launched DALL-E, which automatically creates images from users’ prompts. Shortcuts in graphic design may benefit some but displace others, and questions remain about the copyright status of AI-generated designs.
In a world becoming more and more dependent on technology, it’s no surprise that platforms like ChatGPT alter the way society functions in terms of safety, education, careers, and more. Perhaps these emerging platforms are one step closer to proving Stephen Hawking’s prediction, but for now, it’s important to look at AI not as something to fear, but as something to adjust to.