Ellanor Splinter

As technology advances, how will our daily lives have to change and adapt to keep up?

How will ChatGPT alter our interactions with AI in the future?

January 19, 2023

“ChatGPT, a variant of the popular GPT language model developed by OpenAI, is revolutionizing the field of chatbot technology with its ability to generate human-like text and maintain context over multiple turns of conversation. With its capacity to produce relevant and coherent responses on a wide range of topics, ChatGPT is poised to become a powerful tool in a variety of applications where the generation of natural language text is desired.”
At least, that’s what ChatGPT told us when we asked it to “Write a lede about ChatGPT.” As the chatbot explained, it was developed by OpenAI, a nonprofit artificial intelligence company, founded in late 2015 by Sam Altman, Elon Musk, and others who collectively pledged $1 billion. Released on Nov. 30, 2022, ChatGPT—“GPT” stands for “Generative Pre-training Transformer”—is still in its early stages.

ChatGPT paves the way for the future of AI

Since its release, ChatGPT, while still in its beta version, has proven to be an advanced virtual assistant, demonstrating capabilities like quickly translating languages and summarizing text. Its versatility outpaces that of other conversational artificial intelligence, such as Siri, Alexa, and Google Assistant, and its popularity has spread quickly. ChatGPT’s potential is endless; it can streamline the tasks of a large workforce by churning out informative emails and math proofs and even identifying bugs in lines of code.

By assisting with basic jobs, the chatbot is able to help people avoid doing redundant tasks, allowing them to focus on more creative pursuits. As an experienced coder, senior and Coding Club President Racicth Anasuri believes ChatGPT can help people work more efficiently. He said, “I was concerned because as a coder, ChatGPT can write its own code. However, if coders have a question with code, they don’t have to wait for a senior developer to look at it, test it, and see what’s wrong. ChatGPT can just look at it and work through its previous training data to give a response quickly and solve such problems.” 

What sets ChatGPT apart from other chatbots is its ability to produce incredibly human-like writing and condense complex concepts into easy-to-read text within seconds. Unlike a Google Search that will return many disconnected “hits,” ChatGPT presents information in a logical flow that is similar to how a human might convey information. In this way, ChatGPT has proven to be an effective online teacher. Its ability to complete tasks like writing essays is an effective time saver for businesses and other organizations that need to process lots of information quickly. The program can continually adapt to new challenges as it learns from large amounts of data.

However, ChatGPT’s vast capabilities have created an ethical issue surrounding its various applications. The primary concern is that ChatGPT’s output will eventually trump that of actual humans, eliminating the need for human involvement in certain industries altogether. The truth is that while ChatGPT can mimic the artistic tendencies of real people, it is still subject to the scrutiny of other technologies that are capable of identifying plagiarism. People are already building safeguards in response to ChatGPT; an app called GPTZero that can determine whether or not something was written by AI was published about a month after the beta version of ChatGPT was launched. To make it easier for text-checkers like GPTZero to identify plagiarism, OpenAI researcher Scott Aaronson and his team are working towards creating a watermark for ChatGPT-produced content. “Basically, whenever GPT generates some long text, we want there to be an otherwise unnoticeable secret signal in its choices of words, which you can use to prove later that, yes, this came from GPT,” he said in a blog post that summarized a lecture he gave about AI.
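
To illustrate the general idea Aaronson describes, here is a minimal toy sketch in Python. It is an illustration only, not OpenAI’s actual, unpublished scheme, and every name in it (the key, the functions) is hypothetical: a secret key deterministically marks roughly half of the possible next words as “preferred” at each step, a watermarked generator leans toward those words, and anyone holding the key can later check whether a text contains suspiciously many preferred words.

# Toy sketch of a keyed statistical watermark for generated text.
# Illustrative assumption only; not OpenAI's actual, unpublished method.
import hashlib
import random

SECRET_KEY = "example-key"  # hypothetical secret shared by the generator and the detector

def is_preferred(prev_word, word):
    # A keyed hash splits candidate words roughly 50/50 into "preferred" and "other".
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermarked_choice(prev_word, candidates, bias=0.9):
    # Pick the next word, strongly favoring the "preferred" half when possible.
    preferred = [w for w in candidates if is_preferred(prev_word, w)]
    if preferred and random.random() < bias:
        return random.choice(preferred)
    return random.choice(candidates)

def preferred_fraction(text):
    # Detector: the share of consecutive word pairs that land in the "preferred" half.
    # Ordinary human text hovers near 0.5; heavily watermarked text scores much higher.
    words = text.split()
    if len(words) < 2:
        return 0.5
    hits = sum(is_preferred(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

On a long passage, a score far above 0.5 is statistical evidence that the text came from the watermarked generator, which is the kind of after-the-fact proof Aaronson describes, while the signal stays invisible to a casual reader.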

All in all, ChatGPT is not inherently built for misuse. To prevent ChatGPT from helping people with malicious intent, OpenAI has programmed it to identify malicious prompts and refuse to answer them. The chatbot will not help its users if they ask for advice on topics related to harassment and violence. It’s important to note, though, that if someone tells it to carry out a task with the intent of passing the result off as their own original work, ChatGPT will do so; what it produces is entirely up to its users. Yes, the current model of ChatGPT can be taken advantage of, but it’s still in its early stages of development, and more precautions are going to be put in place. With the right safeguards, ChatGPT has the capability to do a lot of good in the future.

The chatbot, along with other AI systems released by OpenAI, signifies a new generation of language-based models, one that can react to humans’ commands more accurately than ever before. Its technology leaves potential for a new generation of chatbots that can translate and transcribe information in a more coherent way. Conversation-oriented AI systems are already used daily by many—80% of people have interacted with a chatbot at some point in their lives—and it’s only natural that their capabilities will grow over time.

The incorporation of technology like ChatGPT into society is sure to welcome the creation of more AI-based jobs and an increased interest in tech among students. “The technology is only ever as good as the algorithm and I think there are limits to machine learning,” IT Specialist David Wolf-Hudson said. “I don’t think that especially in very highly specialized areas, computers will ever completely supplant humans…But I think it’s going to kind of take our jobs in a more specialized direction.” 

While still in its beta version, ChatGPT has made history as one of the most advanced chatbots ever released to the general public. It is sure to take society by storm and usher in a new age of technology. By opening doors for the future of AI, ChatGPT allows us to get a glimpse of where technology can take us.

About the Contributors
Karissa Cheng, Staff Writer

Karissa Cheng (she/her) is a sophomore and returning to her second year as a Zephyrite! When she isn’t complaining about being tired all day, she’s...

ChatGPT raises ethical concerns that may harm society and future AI

The word “robot” was coined by Czech writer Karel Čapek in his 1920 play “Rossum’s Universal Robots,” in which robots grow unhappy working for humans and rebel. Today, robots aren’t exactly taking over the world, but if artificial intelligence continues down the unstable path it’s on now, a takeover is bound to happen.

ChatGPT is an AI with the ability to converse with users, but unlike other AI, it can carry context throughout a conversation by responding to follow-up questions, acknowledging errors, and challenging incorrect assumptions. These abilities make plagiarism in school easy: the bot can spit out mathematical proofs that let students slack off in class and write code that threatens software engineers’ jobs, all while running on a potentially biased model.

When exploring the topic of AI usage, it’s difficult to ignore ownership. Users can ask AI to create something while still claiming it as their own. Consequently, teachers are worried about the use of AI for school work. “I’m already seeing AI creep into my classes. I’ve had students pull out an AI [art] generator [to plan artwork] and copy it,” Edina High School drawing and painting teacher Dalen Towne said. 

With ChatGPT’s capabilities, there’s no doubt that it will be misused by students. A student lacking motivation could easily prompt ChatGPT to write an essay for them, and once the essay is turned in, no one would be the wiser.

Every subject has the potential to be exploited by ChatGPT. “I’m concerned about [future coding students] who graduate. [Will] companies hire them thinking that they know how to code and they don’t? How will they perform?” EHS Project Lead the Way teacher Shannon Seaver said.

ChatGPT raises serious questions surrounding academic dishonesty while also making students dependent on the technology, resulting in failure to learn. Copyright laws need to be created to define this gray area, but until then, teaching students the ethics of using AI is vital. 

ChatGPT needs regulation, especially concerning bias. Trained on a broad data set, it learned to recognize patterns and produce text eerily similar to human writing. If there is any bias in that data, however, the bias could be amplified. OpenAI’s website explains the steps taken to train ChatGPT, but it doesn’t state where the information was obtained or whether the OpenAI team was even able to provide accurate data for ChatGPT.

“Let’s Talk About Race: Identity, Chatbots, and AI,” a paper published in 2018, explains how chatbots have difficulty discussing race due to the biased content of their databases. This is especially worrisome, as 83% of companies say AI is a top priority for assisting their operations. Impartial data must be provided during training so that AI does not operate inaccurately when minorities, particularly people of color, interact with the system. Unfortunately, there’s no way to prevent ChatGPT from consuming biased content—the only way to stop it from forming biases is to completely overhaul the system. It raises the question: how can we depend on such technology as our future if we can’t trust it to represent humankind?

OpenAI CEO Sam Altman suggests that people “thumbs down” harmful or biased results to help ChatGPT improve, but a “thumbs down” simply won’t cut it. Handing the problem over to users will not solve enough in the long run. “What’s required is a serious look at the architecture, training data and goals. That requires a company to prioritize these kinds of ethical issues,” said Steven Piantadosi, head of the computation and language lab at the University of California, Berkeley. OpenAI’s lack of accountability will only encourage future developers to follow in the company’s footsteps, failing to accurately model our diverse world.

With the rise of ChatGPT, however, “future developers” may not even exist. Concerns that AI could replace human jobs grow daily. Software engineers are finding that ChatGPT, even in its free-to-use stage, can write working lines of code. Though it hasn’t reached the level of intricacy human developers offer, it would be unsurprising if it did in the coming years, threatening the job security of many.

Creative jobs are at stake too—companies that prefer cheaper labor will turn to AI-generated material rather than that of humans. Another OpenAI program, DALL·E, has rattled the art community by producing AI-generated works, outcompeting artists who spend hours creating their pieces. “They’re taking our work and it’s being put out there for other people to use for far less money. There is no compensation and it also brings up copyright issues that artists are going to have to deal with,” Towne said.

For those unconcerned about AI and ChatGPT, it’s true that the bot itself is imperfect: it has little knowledge of events after 2021, a tendency to produce incorrect information, and a habit of giving inconsistent answers when the same question is rephrased. Yet ChatGPT’s use of Reinforcement Learning from Human Feedback and continuous updates will allow it and similar programs to dominate the tech industry.

Until OpenAI changes the way it handles ethical issues, ChatGPT’s mere presence invites misuse. If a bot in its early stages managed to flip the world upside down, who knows what it could bring in the coming years? We must monitor our own usage, keeping an eye on what AI will bring and how the world will wield it. If we don’t, an AI dystopia could be a blink away.

About the Contributors
Lynn-Clara Tun, Section Editor

Lynn-Clara (she/her) is a sophomore looking forward to being the student life section editor on Zeph! She’s overly obsessed with chicken pot pie and...
