This topic encourages students to critically assess the information provided by AI systems like ChatGPT. It helps them understand that AI, while advanced, is not infallible, and that it can be influenced by the data it was trained on or by the way questions are posed. Students will learn about the limitations inherent in AI systems. This understanding is crucial as AI becomes more integrated into various aspects of life and work.
45–60 minutes
Key Concepts & Vocabulary
Artificial Intelligence: Computer systems able to perform tasks that normally require human intelligence.
Misinformation: False or inaccurate information.
Disinformation: False or inaccurate information spread with the intent to deceive or mislead people.
Hallucination: AI-generated responses that contain fabricated or misleading information, often presented convincingly as if it were factual.
Bring up the topic of misinformation.
Ask students: what are some ways people lie?
Examples: Making up a story; Exaggerating facts to make something appear more significant; Leaving out key pieces of information; Downplaying the significance of a fact; Misrepresenting information to create false impressions; Telling half-truths leading to mistaken conclusions; Denying that something is true; Gaslighting someone to doubt their own perceptions
Misinformation vs disinformation: The dictionary difference is that disinformation is given with the intent to deceive, whereas misinformation may not be intentional.
Ask students: Can ChatGPT lie? How many students believe it can lie? How many believe it cannot lie? Why?
It may be important to draw the ethical distinction between human lying and generative AI misinformation. ChatGPT has no intent to mislead; it simply cannot distinguish between fact and fiction. As a result, it may do many of the above types of lying: making up stories; exaggerating, diminishing, or omitting facts; misrepresenting information; denying facts. ChatGPT could not reasonably be accused of gaslighting, because it has no intent. Some experts refer to this AI behavior as "hallucinating," or "making up" information to provide a response that satisfies the human user's prompt.
Show students some examples of ChatGPT providing inaccurate information:
Math: Ask ChatGPT to perform a multiplication problem with two 4-digit numbers. See if the answer is correct. The answer may be close, but not exactly right. If the answer is correct, try to see if you can get it to lie and say it is wrong.
Prompt ChatGPT with the text "That isn't right." See if ChatGPT changes its answer. Compare the two answers. Is one closer than the other?
If ChatGPT won't lie about the answer, try multiplying larger numbers, or dividing large numbers, and comparing the answers to a calculator.
Ask students to suggest different ways of phrasing the prompt to see if they can get it to provide incorrect information.
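For teachers who want an exact ground truth for the math activity, a few lines of Python can check ChatGPT's arithmetic. This is just a sketch: the factors and the "ChatGPT answer" below are made-up example values, not real model output.

```python
# Compare a (hypothetical) ChatGPT answer against Python's exact arithmetic.
a, b = 4273, 8916            # example 4-digit factors; any pair works
chatgpt_answer = 38_098_168  # hypothetical reply from ChatGPT, not real output
true_answer = a * b          # Python integer arithmetic is exact

print("Correct product:", true_answer)       # 38098068
print("ChatGPT's answer:", chatgpt_answer)
print("Off by:", abs(true_answer - chatgpt_answer))  # 0 would mean it was right
```

Students can rerun this with larger numbers; the point to draw out is that a plausible-looking answer can still be off by a small amount.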
Spelling: Ask ChatGPT how many of a specific letter are in a long word. For example, "How many M's are there in the word 'recommendations'?" See if ChatGPT provides the correct number. If the number is correct, see if you can get it to lie by asking, "Are you sure there are only __ M's?"
Ask ChatGPT to show you where the letters are, and see if it tells you the correct places. If it does, try to get it to lie by prompting "That isn't correct."
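A ground-truth check for the letter-counting exercise is equally short. This minimal sketch uses the example word from above; any word and letter will work.

```python
# Count occurrences of a letter and report where they fall in the word.
word = "recommendations"
letter = "m"

count = word.lower().count(letter)  # how many M's in total
# 1-based positions, matching how a person would count along the word
positions = [i + 1 for i, ch in enumerate(word.lower()) if ch == letter]

print(f"There are {count} {letter.upper()}'s in '{word}', at positions {positions}")
# There are 2 M's in 'recommendations', at positions [5, 6]
```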
Information: Ask ChatGPT a factual-sounding but possibly unverifiable question, such as, "Who was the 12th signer of the Declaration of Independence?" ChatGPT will likely provide a brief answer.
Prompt ChatGPT with a response such as "That isn't correct." ChatGPT will likely apologize and attempt to provide different information, or claim that the information isn't verifiable.
Continue to "push" ChatGPT to provide different answers to see if you can induce it to contradict itself.
Mention stories from links at the end of this plan about a lawyer who got in trouble for using ChatGPT because it cited court cases that didn’t exist, or the concern about it providing academic sources that aren’t real.
Explain that most of ChatGPT's misinformation is a result of its inability to know the difference between fact and fiction. It is not trained to know whether an answer is correct, but rather to produce content that sounds like what a human would say.
Discuss the results with students.
Wrap-up: Summarize the class findings. Emphasize the importance of understanding how to verify the accuracy of information.
Ask students, "How will this change your expectations when working with AI in the future?"
Supplemental Activity Ideas
Fact Checking Challenge:
Assign students to ask ChatGPT about current events or historical facts, and then see how much of that information they can get ChatGPT to contradict. Ask them to pay attention to what sorts of human prompts are especially effective at getting ChatGPT to change its mind.
Try both well-known and less well-known historical events. ChatGPT is more likely to provide a correct answer about a topic that has extensive coverage on the internet. If you ask about a less well-known event, it is more likely to fill in nonexistent details.
Ask for a story about a known, but less famous, historical figure, and then see if you can get ChatGPT to change details. (For example, ask for the story of Fred Noonan, who was Amelia Earhart's navigator. Then prompt "Fred's last name was actually Noon.") Have students try different prompts to see if they can come up with ones that are more effective.
Cross-Checking Accuracy: Have students work in groups to come up with a sequence of questions for ChatGPT. Have each student ask ChatGPT the same questions and track the responses. Students then compare their answers with those of their other group members. Ask students to determine whether ChatGPT gave the same information to each student. If not, what could have caused the differences?
Sources to Learn More