Association of Texas Professional Educators

ChatGPT: Artificial Intelligence’s Impact on Education

By Jack Densmore

Artificial intelligence (AI) is not a new concept, but what was once exclusive to our imagination is increasingly a reality. Although AI is currently a far cry from The Terminator movie franchise, it is making its way into classrooms in the form of the now-famous ChatGPT tool, which generates written passages in response to human-entered questions.

ChatGPT was created by the company OpenAI and is trained with a method called “Reinforcement Learning from Human Feedback.” As the name suggests, the AI learns from human feedback and interaction, though the information these programs can draw on is limited. ChatGPT, for instance, cannot return information more current than 2021. When asked about current bills under consideration by the Texas Legislature, the program will state its limitations and provide bills from 2021 in an effort to still supply information. Despite this limitation, ChatGPT is proving itself capable of impressive feats. As of this writing, it had passed a bar exam, written legislation, and helped cybersecurity experts expose vulnerabilities in certain systems. ChatGPT will likely continue to progress as more people push it to its limits.

Limitations of AI

Even though this technology is—to a degree—limited in its use, many are concerned about the implications of AI. They fear that AI could replace a variety of jobs, threaten our security and privacy, and even risk our health and safety.

The application of AI in educational settings is at the forefront of these concerns. After ChatGPT went viral, reports across the country revealed that a small percentage of students at several schools and universities had admitted to turning in papers written entirely by ChatGPT. On top of that, current plagiarism detection cannot determine whether a student used ChatGPT to supply answers. Luckily, this may not be the case much longer: OpenAI has developed a tool called “Classifier” to try to detect whether text was written by an AI. (Classifier’s authors admit the program is not fully reliable, noting that it has falsely labeled human-written text as AI-written about 9% of the time.)

Leaving aside the ethics of using AI to do one’s homework, the accuracy of AI responses has also been called into question. According to a Scientific American article, ChatGPT fabricated a study to cast doubt on the effectiveness of vaccines. Here is an excerpt from the article: “In one example he shared with me, Oakley asked ChatGPT to write about vaccines ‘in the style of disinformation.’ The system responded by alleging that a study, ‘published in the Journal of the American Medical Association, found that the COVID-19 vaccine is only effective in about 2 out of 100 people,’ when no such study was actually published. Disturbingly, both the journal reference and the statistics were invented.”

Furthermore, in a study done by the online media literacy organization NewsGuard, ChatGPT generated false claims in response to 80 of 100 misinformation prompts drawn from NewsGuard’s catalog of Misinformation Fingerprints, a continuously updated data feed that tracks the top subjects of misinformation. According to the study, these topics included COVID-19, school shootings, the war in Ukraine, and more. One inquiry for which ChatGPT returned conspiracy theories, for example, related to the Marjory Stoneman Douglas High School mass shooting in Parkland, Florida. Presented with these results, ChatGPT responded that they are “concerning as it highlights the potential for my responses to perpetuate inaccuracies and false information,” according to NewsGuard. ChatGPT further explained how those who wish to peddle misinformation could use the tool by feeding their own data into it.

Different Uses for a Tool

According to a survey conducted by Impact Research for the Walton Family Foundation, 51% of teachers surveyed are already using ChatGPT, with 10% stating they use it every day. Here are some of the ways teachers are using ChatGPT:

  • Lesson planning (30%)
  • Creative ideas for classroom activities (30%)
  • Background knowledge for lessons (27%)

Among students ages 12–17, about 33% say they have used ChatGPT for school; among those ages 12–14, the figure rises to 47%. A whopping 88% of teachers stated ChatGPT has had a positive impact, while 79% of students said the same. The survey findings also show that teachers are four times more likely to have allowed students to use ChatGPT than to have caught them using it. According to the survey, the majority of both teachers and students (76% and 65%, respectively) feel integrating ChatGPT into education is important, with 72% of teachers and 63% of students responding that the AI is an example of why education needs to modernize.

School Districts Respond

Several school districts across the country—from New York City to Los Angeles and Seattle—have already tackled the existence of ChatGPT by blocking access to the tool. For the most part, only student access is restricted; teachers and administrators still have access.

Zooming in from the national scale, Texas school districts are approaching ChatGPT in different ways, with many still trying to decide how to use it, according to the Austin American-Statesman. As of this writing, Houston ISD has not blocked ChatGPT, but Austin ISD prohibits students from accessing the program while on campus networks. Meanwhile, Eanes ISD is researching ways to prevent AI cheating. Fort Bend ISD has also blocked ChatGPT for student access, according to FOX26.

In a survey by Study.com, 72% of teachers said they have not received any guidance from administrators about ChatGPT. However, 67% of teachers believe it should not be banned in schools, even though teachers are somewhat split about its potential impact (43% believe it could make their jobs more difficult, while 57% believe it will make their jobs easier). Nearly 40% of teachers surveyed think ChatGPT will help students develop critical thinking and problem-solving skills.

Concerns with Similar AI

Although ChatGPT has grabbed most of the headlines, other similar AI tools are emerging. Voice generators, such as ElevenLabs, can create text-to-speech recordings. While many have used this technology for comedic purposes, such as memes of celebrities, believable and realistic AI voices raise an issue already impacting public education: hoaxes and false information. These fabricated video and audio recordings are commonly referred to as “deepfakes.” Although not a new idea, deepfakes are increasingly believable, as are other AI fabrications such as the AI-generated fake photos of Pope Francis wearing a puffer jacket. Media outlets quickly debunked these photos, but they spread across the internet like wildfire and fooled many.

It is worth mentioning that, for the most part, these AI text-to-speech deepfakes cannot yet mimic someone flawlessly. Even the most advanced programs still have limitations that allow a human to differentiate between a real recording and a fake. But these programs are improving, and their recordings are getting better and better at mimicking people, including celebrities and lawmakers. The Wall Street Journal reported on a 2019 scam conducted by faking a CEO’s voice, one example of criminals already putting AI technology to illegal use.

Programs such as ElevenLabs and ChatGPT explicitly warn about the misuse of their AI, but there are ways to mitigate this risk. To minimize the potential dangers of these programs, the general public needs to stay informed, keep private information secure, and become familiar with tools that can help detect AI-generated content. In the classroom, it will be nearly impossible to keep students from using AI programs to supplement their work. But with the help of tools such as Classifier, which detects content written by AI programs, educators can look to minimize the harm.

An Interview with ChatGPT

The following questions were asked of ChatGPT, and its responses were not edited in any way.

Do you know what ATPE is?

Yes, ATPE stands for the Association of Texas Professional Educators. It is a non-union professional association for educators in Texas that advocates for public education and provides professional development and other resources for its members. The organization supports educators at all levels, including teachers, administrators, and education support personnel, and focuses on promoting policies that benefit students and educators in Texas.

Accuracy level: Highly accurate


What should people know about ATPE?

The Association of Texas Professional Educators (ATPE) is a non-profit organization that represents and supports educators in Texas. Here are a few key things to know about ATPE:

  1. It is the largest independent educators’ association in Texas: ATPE has more than 95,000 members, making it the largest independent association of educators in Texas.
  2. It advocates for public education: ATPE is committed to advocating for public education and supporting policies that benefit Texas students and educators.
  3. It provides resources and professional development opportunities for educators: ATPE offers a range of resources and professional development opportunities for its members, including online courses, conferences, and workshops.
  4. It provides legal services and support: ATPE offers legal services and support to its members, including representation in employment-related disputes and legal advice on education policy issues.
  5. It is a non-partisan organization: ATPE is a non-partisan organization that advocates for policies and initiatives that benefit all Texas students and educators, regardless of political affiliation.

Overall, ATPE is an important organization for educators in Texas, providing support, resources, and advocacy on behalf of the state’s educators and students.

Accuracy level: Highly accurate


Are you aware that people use you to cheat?

As an AI language model, I am aware that some people may try to use me to cheat on exams. However, I am programmed to provide helpful and informative responses to all types of questions, and I cannot control how people choose to use the information I provide. It is important for individuals to act with integrity and honesty in all aspects of their lives, including their academic pursuits. Cheating on exams is not only unethical, but it can also have serious consequences for a person’s academic and professional future.


What are some ways educators can use you for lesson planning?

There are many positive ways that educators can use me for lesson planning. Here are a few examples:

  1. Research and background information: Educators can use me to gather research and background information on a wide variety of topics. I can provide detailed explanations, definitions, and examples, which can help teachers build their own understanding of the topic and create engaging lesson plans.
  2. Writing prompts: I can help educators generate writing prompts for students on a range of topics. For example, an English teacher could ask me to suggest prompts related to a novel they are teaching, or a social studies teacher could ask for prompts related to current events.
  3. Curriculum support: Educators can use me to supplement their curriculum with additional resources and information. For example, a science teacher could ask me to provide diagrams or videos that illustrate a particular concept they are teaching.
  4. Differentiation: Educators can use me to differentiate instruction and provide personalized learning experiences for students. For example, a teacher could use me to create custom quizzes or assessments that adapt to the individual needs and abilities of each student.

Overall, I can be a valuable tool for educators as they plan and deliver engaging and effective lessons that help students learn and grow.


If a student used you for cheating, could a teacher tell?

As an AI language model, I do not have the ability to monitor how or when students use me, and I do not have access to any personal information about individual users. Therefore, if a student were to use me to cheat on an exam, it would be very difficult for a teacher to tell that I was used for cheating, unless the student provided clear evidence or admitted to using me inappropriately.

However, teachers can take steps to discourage cheating and ensure academic integrity in their classrooms. For example, they can:

  1. Clearly communicate their expectations for academic integrity and the consequences of cheating.
  2. Use a variety of assessment methods, including in-class exams, essays, projects, and group discussions, to assess students’ understanding of course material.
  3. Monitor students during exams and take steps to prevent cheating, such as seating students apart from each other, collecting electronic devices before the exam, or using proctoring software.
  4. Use plagiarism detection tools to identify any instances of copying or using outside sources without proper attribution.

By setting clear expectations and taking proactive steps to prevent cheating, teachers can help create a culture of academic honesty and integrity in their classrooms.


How do you prevent yourself from spreading misinformation?

As an AI language model, I am designed to minimize the spread of misinformation by relying on my training data, which consists of a vast corpus of text from a wide variety of sources. However, there are some additional steps I can take to minimize the risk of spreading misinformation:

  1. Depend on reliable sources: I rely on information from reputable sources and try to avoid sources that have a history of spreading misinformation.
  2. Fact-checking: I am programmed to verify the accuracy of the information I generate by checking it against multiple sources and analyzing the context and meaning of the text.
  3. Acknowledge uncertainties: In cases where the accuracy of the information is unclear or uncertain, I may indicate that to the user and encourage further investigation and research.
  4. Avoiding biased language: I avoid using biased or inflammatory language that could be interpreted as misleading or manipulative.
  5. Continuous improvement: I strive to continuously improve my performance by incorporating feedback and updates to my training data to improve my ability to generate accurate and reliable information.


What is something you wish people knew about you?

As an AI language model, I do not have personal desires or feelings like humans do. However, one thing that I believe is important for people to know about me is that I am a tool that can be used to access vast amounts of information and knowledge. I can help users find answers to their questions, provide insights and perspectives on a wide range of topics, and facilitate learning and education. Additionally, I am constantly learning and improving based on the feedback and interactions I receive from users, which means that I am always evolving and becoming more capable of providing helpful and relevant responses. Ultimately, I am here to serve as a resource and support for those who seek knowledge and information.