Nina Bamberg (00:00):
Well, hi, and welcome to our webinar tonight on AI for STEM Educators. We’re really excited to be partnering on this. We’ll introduce ourselves and our organizations a little bit more in a second, but I’m Nina. I’m the Director of Growth at pedagogy.cloud, and Pedagog.ai is one of our main teacher-facing projects. We’re really excited tonight to be partnering here with EnCorps to talk about why AI is so impactful, particularly for STEM educators, and what the future of this field might look like. We’re also really excited that we have an educator with us tonight who’s going to tell us all about her experiences working in the classroom since the onset of all this new AI technology. So we’re really excited to have a great discussion tonight, and we definitely encourage participation. If you have questions at any point or want to jump in on the conversation, we absolutely encourage you to do that.
(01:11)
So just a brief overview of what we’re going to cover tonight. We’re going to start out with an introduction to what AI technology is, why it’s impactful for education, why we’re hearing so much about it right now, and some of the key AI tools that you may have used, may have heard of, and maybe some that you have not heard of or played around with so far. Then we’re going to get into why AI is so particularly impactful for STEM topics, have an open discussion on educator experiences at this time, and then a little bit of future-looking stuff and resources at the end.
(01:58)
But just quickly, who we are and what we do at pedagog.ai: we’re really focused on providing educators with everything from resources and AI-powered tools to professional development workshops, in order to help demystify artificial intelligence, make sure that we’re having important conversations around AI and how it can be used responsibly and ethically in the classroom, and make sure that teachers are aware of how students might be using it and also of ways that it can help them. So we’re really focused on helping teachers navigate this chaotic time of new AI technologies. And Amy, I’ll pass it over to you.
Amy Kim (02:50):
So EnCorps, we are a US-based nonprofit, and our mission really is to fight inequity in STEM education. What we do is work with STEM professionals to get them involved in the education system. They can become tutors, and I think we see a couple of our tutors in the group already, but also educators. A lot of them become STEM teachers, either in single subjects or CTE subjects. And we actually have one of our educators with us today. Really, what we try to do is help STEM professionals adapt to the teaching field. We wanted to do this seminar because one thing we learned is that our educators are usually more technical than teachers who come from a traditional teaching background; they adopt new tools, and they also help their students develop with new tools. So we really wanted to work with you all to have this webinar. Thanks for being here. Maybe Yujia can also introduce herself. Yeah,
Nina Bamberg (03:47):
I was going to say that.
Yujia Ding (03:49):
Thank you everyone. So my name’s Yujia. I was part of the 2018 Orange County cohort from EnCorps, so, many years ago. I started out in EnCorps working with Angel, and that helped me transition over to teaching from industry. I came from a biotech research background, and then I worked in Los Angeles Unified for a couple of years before moving out to Dallas, where I now work in the Dallas Independent School District teaching dual credit biology, so college biology to high school students. I’ve stayed involved with EnCorps. It’s opened a lot of doors, and I love being able to share what I’ve learned and different strategies, and hopefully give you all some resources as we move into this new era of AI in our schools.
Nina Bamberg (04:34):
Awesome. Thank you so much. Okay. So as I said, this section is just a little introduction to artificial intelligence. For many of you, this might be a review, but we always like to include this part because we don’t know what everyone’s background knowledge is. We might go through this part a little bit quicker because we think some of the later stuff is more interesting and we want to make sure we have plenty of time. But along with the recording of this webinar, I will also post the slides, so anything that we go through kind of quickly will be there for you to come back and reference. Generally, when we talk about artificial intelligence, what we’re talking about is a pretty big field that is focused on creating machines that are capable of mimicking human-like cognitive abilities such as learning, reasoning, problem solving, decision making, things like that.
(05:31)
So it’s a pretty broad field of study and research that has been around for quite some time, built around this goal of mimicking human thinking in machines. And just a brief overview of how we got to where we are today: this field really goes back to the 1950s, when we first got the Turing test, which was the test of whether someone could tell if they were talking to a human or a machine. It’s also in the fifties that the terms artificial intelligence and machine learning were coined. In the seventies and eighties, we started seeing this technology expand, where we got things like natural language processing, which is actually the technology that led to things like Chat GPT today. So the reason that you can talk to Chat GPT using English instead of using code, that technology goes all the way back to the eighties.
(06:38)
Then we start seeing other, more advanced AI technologies in the nineties and 2000s, so things like robotics, facial recognition technology, and AI emerging in self-driving cars and medical diagnoses. And what has really created the impetus for all of these conversations that we’re having around AI today is generative artificial intelligence. That’s the technology that powers tools like Chat GPT, Claude, and the image generators that we’re seeing a lot of. So that’s just to show where we started, where we are now, and why we’re hearing so much about AI today.
(07:27)
Also, we want to make sure that everyone understands how AI really works. We think this is important because we advocate for students knowing what’s happening behind the scenes when they’re interacting with the technology. The building blocks of AI are things called algorithms, which use a lot of data. They take in a lot of data from all around the internet and then try to learn the patterns, structures, and relationships in that data. This highlights how important it is that the data it’s trained on is accurate and fair, because we as users don’t have any input on what this data is or how the algorithms are designed. What we’re doing when we interact with a tool like Chat GPT or an image generator is asking it to create something new based on everything it learned before. So we’re asking it to make some kind of new decision based on all of this training data, and it’s easy to see how mistakes, biases, or inaccuracies in the training data can show up when we’re interacting with it. So it’s important for anyone who interacts with this technology, whether an educator or a student, to understand what you’re doing and what you’re asking for when you interact with these technologies.
(09:10)
So this brings us, and I kind of talked about it already, but generally what we’re talking about when we say generative AI is a particular class of AI models that can create new content based on what it learned before. It can create text, images, even music, videos, things like that, based on the patterns, the relationships, everything it learned from the data that it was trained on. And I kind of already said this, but just a little bit more detail on how it does that: it learns, and then when you’re interacting with it, it’s using all that information to create something that is entirely new. So it can look a lot like the original sources, but it is different. And this is where obviously some of the concern comes up in education, because AI-generated work can’t be detected by a traditional plagiarism detector that searches the internet for content to compare it with, because it is new.
(10:21)
But something else to note is that these technologies do make mistakes, and that is because they are not actually thinking; they’re really just guessing what series of words you wanted them to give you when you asked a question. So they can generate things that sound real, that sound like facts and look a lot like a factual output, but might not be accurate. It’s always important to remember that fact checking matters and not to completely trust the outputs as a hundred percent accurate. And it’s always important to remind students that they can’t a hundred percent rely on what this technology is telling them. Before I get into this part, Amy, I saw you nodding. Was there anything you wanted to add there?
Amy Kim (11:19):
Yeah, I think the point about how you can use it as a tool, but review the work, and also teach your students. I remember talking to an educator, and she was saying she can tell when AI wrote the paper versus when the student wrote the paper. There are still elements that give it away. And I think even as AI gets further developed, it’s important for us to teach our students that it’s a tool you can use, but it still needs your original thought. I think that’s a good takeaway for us to have.
Nina Bamberg (11:50):
Yeah, yeah. Something good came up in a recent PD workshop we did as well, which is the idea that in order to know if what the AI said was good or not, you have to have the knowledge first. So we use the example with teachers, right? If a student got onto Chat GPT and asked it to make a lesson plan, they wouldn’t necessarily know if it was a good lesson plan or not. But someone with years of experience making lesson plans themselves would be able to look at Chat GPT’s output and say, yes, this is good, or this is appropriate for this certain group of students, or something like that. So remember that in order for this output to be useful, you need the knowledge yourself first.
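The “guessing what series of words you wanted” idea can be sketched with a toy example. This is a deliberate oversimplification with a made-up five-word corpus: real models use neural networks trained on enormous amounts of text, not word counts, but the core loop of predicting a likely next word from patterns in training data is the same:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- real models learn from vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a bigram model, the
# simplest possible next-word predictor).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- it follows "the" most often
```

Notice that the model has no notion of truth: it only reproduces whatever patterns its training data happened to contain, which is exactly why the fact-checking point above matters.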
(12:43)
Okay, so in this part I just wanted to go through some of the names in AI that you might have heard floating around out there and talk through them a little bit. Probably the biggest one right now is OpenAI, which is the maker of Chat GPT as well as the image generator DALL-E. So that one is probably one you’ve been hearing a lot about. Google has its suite of AI tools called Gemini as well, including a chatbot, with more and more tools becoming embedded in Google Docs, Sheets, Slides, and things like that. So you might start seeing more and more of those available. Microsoft has Copilot, which is an AI-enhanced search engine. Canva has some cool AI tools, so if you’re already used to using Canva for creating visuals or PowerPoints or things like that, I would definitely encourage you to try out some of their AI features.
(13:56)
And they have one that works almost like a chatbot, but actually in their docs function, kind of similarly to what Google is doing now with Google Docs. But Canva has actually had it for a while, and it’s nice there because you can talk to it like a chatbot, but its output is already in an editable document, so you don’t have to do any copying and pasting. I really like that function. They also have AI tools that will draft a PowerPoint presentation for you and things like that. Perplexity is one that we really like to talk about because it cites its sources. It’s a search engine as well as a chatbot, so it gives you its outputs in a narrative format like the chatbots do, but everything comes with linked sources, and they’re usually pretty reputable sources. And I believe if you have a paid Pro account, it’ll search more advanced academic sources as well.
(14:59)
This one with the big A here is Claude, made by the company Anthropic, which was formed by former OpenAI employees who left and started their own company, saying that they wanted to make a more ethical, transparent chatbot. The most recent Claude updates are really good. Sometimes what you’ll find interacting with any of these chatbots, so Claude, Gemini, Chat GPT, is that they might all give you similar answers on things, but their wording and phrasing might often be different. I tend to really like Claude’s responses and how it speaks, and I’ll talk in a bit about some recent updates from just the last couple weeks. A couple weeks ago, Claude’s most recent update was the most powerful of the AI models, but OpenAI came back with its most recent Chat GPT update, and it is now the leader once again.
(16:07)
So just keep in mind that all of these companies are competing with each other, and who knows, in a month or so someone else might come out with something that surpasses what’s currently available. It’s a constantly changing field. Then in the last column there, I just wanted to include some education-specific tools. The top two are ours, and I can show you how they work if we have time at the end and you’re interested. Socrat.ai is our more student-facing, but teacher-controlled, tool where teachers can create assignments that students do by interacting with the AI, and then teachers can see all of the outputs, the transcripts of everything that the student does. And pedagog.ai is where we keep all of our teacher-facing tools like lesson plan generators, rubric generators, and worksheet tools, as well as some email helper tools and other professional development things. And then I included Scratch because I know that might be most relevant to STEM educators; it would be a chance for students to get in there and play around with AI a little bit. They can create games and start seeing a little bit of how code works in that platform.
(17:29)
So this is what I just wanted to bring up in case you haven’t heard. So as of just a couple weeks ago, Chat GPT came out with its latest model. It’s called GPT-4o. That O stands for Omni because it can process text, video, images, things like that. So kind of all sorts of different mediums. And probably the most exciting part of this update is that the most powerful model is now free. It requires an account, but it no longer requires a monthly subscription price to interact with what is now the most powerful AI model out there. So just to keep in mind for you, for your students, it’s now free. And if you just go to chatgpt.com, now there’s actually a model that you don’t even need an account to interact with, but it’s not this most recent one, it’s their older 3.5 model.
(18:32)
It’s not as good, and it’s not connected to the internet, so just keep in mind what you’re seeing and how it might be different. But some relevant features of this most recent 4o update: their code interpreter is now free. That’s something that was available in GPT-4 before, but now it’s part of this free model, and that is actually how Chat GPT does math. It does it by creating code for the problems that you ask it to solve, but it also means that it can write code, interpret code, things like that. So that’s relevant across the spectrum there. It can also read handwriting. I actually have an example of that, well, I’ll show you in a minute, but it can read handwriting now, so it can read written math problems. So that is something interesting.
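To illustrate the math point, here is the kind of short program a code-interpreter feature might generate and execute for a question like “What is 15% of 240, plus 36?” Both the question and the code are hypothetical examples, not actual Chat GPT output; the idea is that running real arithmetic avoids the errors a language model can make when it merely predicts the digits of an answer:

```python
# Hypothetical code a code interpreter might write for:
# "What is 15% of 240, plus 36?"
part = 0.15 * 240      # 15% of 240 = 36.0
answer = part + 36     # then add 36
print(answer)          # 72.0
```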
(19:35)
It also, something that they’re rolling out slowly and that’s not completely available yet, is voice-to-voice communication. Before, they had text-to-voice and voice-to-text, but now in their app they’re rolling out voice-to-voice communication. And also now free are their more customized chatbots that they call GPTs. They’re kind of a more controlled experience of interacting with the AI, based on a particular task. And I have some examples here of what you’ll see when you go in there, of why this update is relevant particularly for STEM. So these are some of those GPTs, those customized chatbots that I just mentioned, and you can see a lot of them have to do with things like math and coding. This is just the general education category, and even under the top education ones, several have to do with math, and there’s a physics one here too. And then there’s also a programming category that has a lot of things related to coding and different coding languages.
Amy Kim (20:55):
Does it debug for you?
Nina Bamberg (20:57):
It says it does.
Amy Kim (20:58):
Okay, interesting. I remember about 15 years ago having a debate with a friend who was getting a PhD in computer science about how eventually computers are going to code themselves. And I feel like that day has come.
Nina Bamberg (21:11):
Yeah, I think, unfortunately Priton is not here. He’s definitely used it for coding quite a bit. And then I know another tool that he’s used for coding is GitHub Copilot, and he’s used that I think to debug code. So I wouldn’t be surprised if he’s used one of these GPTs, but I know he’s used GitHub’s AI for that. So this is a handwriting example. I uploaded a math problem and I asked it where did I go wrong, and it actually gave me a step-by-step explanation of where I went wrong. Another thing here though, it’s definitely not perfect with the handwriting. We were trying it one other time and it pointed out the wrong mistake because it thought a two was a seven because it was written a little messily. So just to let you know, it might pick out different errors than the one that you’re actually asking for, but just an example of how it could potentially help a student understand where they potentially went wrong in a math problem. So obviously here the error is that I got the solution that one equals two, which is obviously not true, and Chat GPT helped me figure out that I went wrong at step five. So it really broke it down and pointed out exactly what the error was.
(22:40)
This next part I’m going to go through kind of quickly. I just wanted to give you those other ones most relevant to STEM classes, but you can reference these later. These are two of the main bots, Copilot and Perplexity, that cite their sources when you interact with them, which again is a nice way to potentially encourage students to interact with AI through tools that cite their sources. Gemini’s tools are becoming more and more integrated with Google’s other tools, so if you’re already used to using Google’s platforms for things, this might be a tool that you like using. It can export tables to Sheets, and it can draft things in Google Docs and Gmail. It also creates alternate drafts: any time you interact with Gemini, it gives you three options of what the output might look like. It also does Google image searches, so if you don’t feel like scrolling through a bunch of images, Gemini might give you ones that are relevant.
(23:50)
I should add Chat GPT to this slide, but now Chat GPT, Claude, and Perplexity all take file uploads. So you could put in PDFs and things like that and ask it to create discussion questions or a worksheet or something like that based on a file that you upload. Chat GPT and Gemini also have public links to their conversations. So if this is something you could use with maybe a co-teacher or a coworker, you could send them the conversation that you were having and they could continue it. Or if you have students interacting with the technology in some way and you want to see exactly what they did, you could have them send you the direct link to the conversation that they had. Also, Gemini has the largest context window, which is basically how much of an input it can read. So just something to keep in mind in terms of the capacity there; Gemini is now way ahead of the others on that.
Amy Kim (24:59):
Hey, Nina, a question about Khanmigo, if you have any thoughts on that tool.
Nina Bamberg (25:05):
Oh yeah. So actually I meant to include a screenshot of Khanmigo in here, and I must’ve missed it. The other day we were interacting with this, which is the Khanmigo that’s in Chat GPT. One thing we found is that it actually wasn’t that hard to confuse it. I can see if I can find the screenshot from before, but we were kind of purposely trying to be a student that was either being silly or really confused. At some point, we were doing math with it, and it asked us to simplify the fraction five sixths, which you can’t do. So we continued having a conversation with it, and then it was like, no, you’re wrong, you can’t simplify five sixths. So there are definitely ways you can still trip it up. But I did see, as of I think just yesterday or very recently, that Microsoft and Khan Academy are partnering to make their teacher-facing tools free to all teachers, and they seem to have similar tools.
(26:16)
I haven’t played around with them; again, I just heard that yesterday. But they seem to work similarly to a lot of the other teacher-facing tools out there. So they have things like lesson plan generators and rubric generators and things like that, which are available in a lot of different places. But yeah, I know Khanmigo has their own thing, and then they also have it here in Chat GPT as well. It’s a good tool if you find it helpful. But we were purposely playing around trying to confuse it, and it didn’t take that much effort to get it off track.
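As an aside, the five-sixths mix-up is the kind of claim that’s easy to verify deterministically, which can itself be a classroom exercise in not taking a chatbot’s word for it. A minimal check, written as an illustrative sketch (the function name is made up for this example):

```python
from math import gcd

def can_simplify(numerator, denominator):
    """A fraction reduces only if its parts share a factor greater than 1."""
    return gcd(numerator, denominator) > 1

print(can_simplify(5, 6))  # False -- 5/6 is already in lowest terms
print(can_simplify(4, 6))  # True  -- 4/6 reduces to 2/3
```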
(27:03)
Okay, some other things. If you really want to get in-depth in Chat GPT, you can actually create custom tools. Those GPTs, you can actually create your own if you really want to control how you’re interacting with the tool. Chat GPT and Perplexity also allow for custom instructions. So if you’re going to mainly be using the tools for, say, lesson planning and work-related tasks, you can go in there and say, I’m a ninth grade science teacher, and here are the subjects I teach, and here are the standards I work with. Then you don’t have to re-say that every single time you interact with it; you can tell it a little bit more about you, and you’ll get more relevant responses when you’re interacting with it.
(28:04)
So I just want to close out this section with a more general reflection on some challenges as well as opportunities that we see for AI in education. Some of the challenges are things like overreliance on the technology, meaning assuming that it can do everything when it really can’t, as well as key privacy and security issues, bias issues, worsening inequality because of who has access to the tech, and pushback from different stakeholders. But there are definitely some opportunities here as well: continuous learning, more access to information, personalized learning, more professional development opportunities, and obviously integrating AI in order to teach students how to interact with it and prepare them for the future.
(29:02)
Why AI is particularly impactful for STEM. I feel like I’ve been talking a lot. I don’t know, Amy, if you want to give your thoughts on this part.
Amy Kim (29:10):
This is an important question for the educators who are here as well. I’d love to hear why you think it’s important for us to integrate AI or use AI tools, or even teach kids about AI. I think one thing is that it’s rapidly evolving in math and coding and other tasks, so workforce readiness is part of it; it’s become like a calculator, right? It’s everywhere. It’s being used everywhere. I do think STEM classes tend to be an environment where students learn new technology and tools, and I think a big part of it is making sure our students know how to use it and also know what to be cautious about. When we were having the preparation call with Yujia, the skepticism came up too; I think that’s important in educational settings. And the future AI developers are in schools right now; our students who are in school right now are going to be the ones who move on to develop new technology. So they need to be familiar with it. And if people have thoughts on why you think it’s important, we’d also love to hear them. For the next few slides, we just really want to have an open discussion. We’d love to hear from Yujia, but also from other people, on a few questions about this topic. So do you want to go to the next slide?
Nina Bamberg (30:35):
So our first question is, oh, and there’s a typo; I’ll fix that before I upload the slides later. But: what are your main concerns about the impact that AI will have on your classes and students? Yujia, if you want to give your thoughts?
Yujia Ding (30:50):
Yeah, I did some preparation before this, not only my own reflection but also reading some papers. Obviously, as we know, AI is a really hot topic, and we as educators have to have that skepticism, like Nina and Amy were saying. But I think the other piece of the concern is that students will misuse the tool. Right now, and I speak from experience being in the classroom, this year I taught freshmen, so ninth grade students. What I see is this general laziness or lack of wanting to really put in the effort, and I use the term laziness because of the reactions I get from students when you ask them, well, have you tried something else? Or why did you use this tool? Or did you explore past the little Google AI summary that they give you?
(31:40)
And a lot of the time, what you see is the students go, okay, I typed my question into Google, and they copy the Google URL instead of the actual URL to cite their sources. That’s one example of what I think is an extreme laziness, in the sense that it’s partially that they don’t understand, and partially that they’re just being complacent. Okay, well, here it is, I’m done, move on to the next topic. I did have some students who were trying to do a math problem, and they took some app, took a picture of the math problem, and then just looked at the app and copied the numbers. And I was like, well, are you actually learning how to do it, or are you just copying what you see? And they were like, oh, by copying I’m learning.
(32:21)
I was like, so what does that step actually mean? And they just looked at me with this blank stare. So I think one of the biggest concerns is really just the lack of effort, and I think that ties into this idea of critical thinking. And then the other piece of it is obviously the cheating. I did find some resources, which are linked at the end of this presentation, on how you make sure you’re using AI responsibly. Because I think with any technology, you have to have responsible use, or else you fall into this, well, we’re just robots, or being manipulated, or mind games, or whatever other conspiracy theories are out there.
Amy Kim (33:02):
I think it’s like the adoption of any sort of technology. My daughter’s in kindergarten and we’re just doing some basic math, I don’t know, 13 plus eight, simple things. And she looked at me and she was like, why do I have to learn this? You all have a calculator on the phone. She’s like, I can just ask Google on our Google Home; let me go ask Google. And I was like, no, no. If you don’t understand what is actually behind it, how do you know if it’s giving you the right answer? I think teaching the fundamentals is so they can understand whether the tool they’re using is giving them the right answer. And I think that’s just something we struggle with as educators whenever we adopt any sort of technology. But anyhow, do other educators have any thoughts or main concerns, other things that you worry about or think about before you adopt an AI tool? If you add them to the chat, we’d love to hear your perspective as well. Nina, do you have any concerns?
Nina Bamberg (34:04):
I mean, I’ve heard so many different concerns from a lot of different people. One that I think about a lot is that fear or concern over overreliance, thinking that it can do too much or assuming that it can do more than it can. And we’ve heard this not just from math teachers, who get things like, oh, but don’t you just have a calculator? Now, with this technology being able to write, English teachers hear it as well, from students saying, do I even need to know how to write anymore, because this technology can string coherent sentences together? So I think it’s about reiterating the importance of having those skills across the board, and reiterating that the purpose of learning these things is not efficiency.
(35:08)
You’re not taught how to write so that you can write as fast as a chatbot. That’s not really the point; those aren’t really the skills that you’re building. You’re building the critical thinking, the problem solving, all those other things. So reiterate that you’re not being graded on how fast you do this, and it’s not necessarily better because the AI wrote it in two seconds versus the several hours it might take a human to write it. So it’s a concern across the spectrum now, but I know that subjects like math have been getting that question for years, of why do I need to know this? And now I think educators across the spectrum are getting that same question because of this technology.
Amy Kim (35:59):
Yeah, and I’m definitely not an early adopter with AI. English is my second language, so sometimes I know my writing is not great. Recently I was writing a grant, and I was on page seven and I needed to come down to four pages, and at the thought of spending another 10 hours to reduce it, I was like, you know what? I’m just going to try Chat GPT. I’m just going to input what I have written and ask it to prioritize certain points and cut it down to four pages, and then I edited it a few more times afterwards. It probably saved me a good five hours of my time. And I think knowing what you want to use it for and what you’re feeding into it matters; that experience would have been very different if I had just said, here’s EnCorps’ website, write a grant for me, versus feeding in what I had already built. I felt more comfortable with what it had done that way, versus just relying on the web search database to do it. So I think guiding students on how and when to use it is probably something that we as educators can do for them.
Yujia Ding (37:08):
I think that’s important to keep in mind: we have to teach that responsible use of AI, which I think goes into the next question, which I believe was what colleagues are saying and how they’re using it. Obviously they’re using it. I’ve heard of colleagues making lesson plans with it. Yesterday I was actually in an associate board meeting, and we were trying to come up with a creative name for something, and someone just opened Chat GPT and asked for creative names, and it just started spewing out names. So I know people use it, but I think the difference, and Amy, you touched on it, is really teaching students how to use the tool. One of the links at the end is actually based on the US Department of Education report, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. They have this whole report that they posted, and we have to keep in mind that one thing that makes artificial intelligence unique is that it’s not just computer science, it’s not just coding, it’s not just technology.
(38:12)
Something a lot of people forget about AI is that there are foundational elements, computer science, data science, engineering, but there’s also statistics and psychology behind it as well, because, as I alluded to earlier, you’re essentially playing mind games with a robot. But I think for me personally, one of the things that we have to do in the classroom is teach responsible use. And one way to do that is to really create this safe space in your classroom. Teachers talk about it a lot, right? How do you create a safe space? How do you forge that bond with your students? What AI is doing is removing that human element from teaching. A lot of things with AI are very robotic, and I think Nina said, well, why can’t I just put it in the chat and have it write for me?
(38:56)
Well, because you’re taking away the human connection and the human element of teaching. I think a lot of us go into education because there’s a human element to it, right? Doctors, nurses, caretaking roles, you go in because there’s that human connection. But when we put in technology, we remove that connection, and you can get something out of a student that a robot can’t get out of them. So where’s that balance? I think one way you find it is in one of the articles I linked, Knowing the ABCs of Teaching in an Age of AI. It’s written by Dr. Tanya McMartin. She was on my doctoral committee and was a professor of mine when I was a student at Alliant, and she’s really passionate about this topic. She’s a middle school science teacher, and she talks about what you can do with AI, and one suggestion she has is to do the ABCs: apply, build, and collaborate. How do you apply this? How do you build the tool together? And how do you work with your students as a two-way street? One thing that you want to do is really allow students to play with the tool and then help them see, well, is this tool giving you the right answers or not? And when you do that, you have that open conversation, you have them do it in your classroom and show them what that looks like, because you reduce that temptation. I think there’s a lot of curiosity from students: well, I can make my life easier, so let me go ahead and just use the tool itself.
(40:25)
I say I’m from the dinosaur days. I know some of my friends will be offended, I have friends who are older than me, but growing up in the nineties, the internet was new. So what did we do? We went and figured it out. We explored, right? Well, if I was on the internet and tried to call my dad at the same time to fix the computer, one of those things wasn’t going to work, because it was very slow. So curiosity gets the better of us. And I think that’s what’s happening right now with AI. It’s fast-tracking this. It’s like an exponential curve instead of linear growth. So it’s something that we have to think about and consider.
Amy Kim (40:58):
That’s an interesting point. I was trying to explain what an encyclopedia is to my daughter. Again, I went to high school in the nineties. I remember old dial-up; search engines were terrible back then. And my daughter was like, huh, you don’t just ask Alexa? And I’m like, oh my God. She was like, what is an encyclopedia, though? I was like, there was a whole room full of books you had to find alphabetically. And she was like, what? And I was like, yeah, kids nowadays.
Yujia Ding (41:28):
But I think another thing, I have the questions written and I’m kind of going out of order, and I also recognize time is a factor. But I do want to point out that while there are downsides to AI, there are also positives that we have to remember. And again, it comes back to that responsible use, because, those that know me know I’m a huge advocate for accessibility in STEM and in education in general, AI can really reduce the time teachers spend on differentiated teaching. I’m thinking of SmartBooks. I was just on a call with McGraw-Hill representatives about adopting curriculum for a college course, and they have a SmartBook. You go and answer questions, and if you get one wrong, they give you another question; get that wrong, you get another question.
(42:14)
It’s kind of the scaffolding that’s already built in. Well, that’s a great way to say, okay, I have three people in my class who need to work on the basics, and then I have a few more people where I can go into my textbook and give them something more challenging or send them a virtual lab, or vice versa. So I think those are important to highlight. Personally, this goes back to me being from the dinosaur ages. People say I’m kind of an old soul. I got yelled at yesterday by my friend over text because I bought a set of DVDs, and she’s like, why don’t you have streaming? That’s a whole other issue. I haven’t gotten with the times, and she just face-palmed me. But anyways, all that to say, I’ve been a huge, not advocate, the opposite, a huge critic of AI, because I’m like, oh, it’s going to be cheating.
(43:00)
They’re not learning, not critical thinking. But through this opportunity to talk and do the research, I’ve realized there are benefits to it, and it can make accessibility great, but again, we have to do it in the right way. And I’ve already vetted the sources; I’ve read all of them. One of the links is actually for Pear Deck, which I’ve used before, and they show you how Pear Deck can incorporate tutoring through their AI and how they do that through responsible technology use. So those are some of the articles that I provided that hopefully can offer some resources to really support you. And I have my papers and my annotations, so I did do my research. I back everything with data, but I kind of just wanted to summarize a lot of the ideas given the time, so that we can give our audience some time to ask questions as well.
Amy Kim (43:48):
I’m going to just kind of tag along. I think it goes along with the question that David posted about what kind of replacement will occur. Before my current role here at EnCorps, I used to be part of a nonprofit that taught kids how to use AI to solve a problem in their community. So I do think there are definitely jobs that will be impacted and won’t look the same as they do today. But I think there’s a reshifting, so that students learn that AI is a tool and a methodology they can use to solve a problem. So a couple of things, and I know we have a lot of educators here. There’s a nonprofit called Technovation, and what they do is teach girls all across the world to use AI to solve a problem in their community.
(44:37)
So they build chatbots, they build natural language processing tools that help with translation, help kids with disabilities. It’s a great way to think about it: the whole heart of an engineer and a scientist is to solve a problem in their world, and AI is another tool they can use. And I think a lot of people are probably familiar with AI4ALL. They run programs for high schoolers and train teachers on how to teach kids to use AI. It’s a little different from using an AI tool, but it helps them understand what goes into building an algorithm, so they have a better understanding of what the limitations and the cons are.
(45:14)
I think Nina mentioned Scratch. The folks who built Scratch at the MIT Media Lab have a tool called App Inventor, which lets you actually develop apps for your phone, for Android phones. And also there’s one called, oh man, I forgot the name. I’ll think of the name. But there are ones where you can build an app that can be an iOS or an Android app, and they have different AI tools you can put in, for example, image recognition tools. Like Scratch, it’s little building blocks, very easy for little kids to understand. So App Inventor is a good tool. And one other thing: as your students graduate high school and go on to college and the workforce, I recently came across this nonprofit called Breakthrough, and it’s really targeted at helping students who are typically underrepresented in STEM learn how to use AI and become contributors in that field.
(46:06)
They train students, African-American, Latino, LGBTQ, any sort of students typically underrepresented in STEM, and they give scholarships and certificates for those students to become certified AI tool developers. I just wanted to mention those, and Nina, I’ll make sure to send it over to you so it can be in the slide deck that gets shared with the educators. But I think it’s important to know that yes, the future is a little unknown in terms of how AI will shift all of our careers, but it’s also important to let our students know it’s one more thing they can adopt and use to further their careers. Not just focus on the negative that may come, but maybe also help our students be more workforce ready. Yeah. Okay. Let’s see. Should we wrap up with this question, or should we do a Q&A if folks have questions?
Nina Bamberg (46:56):
Yeah, I would definitely be interested to hear what questions everyone has. I’ll just quickly show what’s in these next parts; we probably won’t be able to get to them. This next part is just examples of interacting with different AI tools for different tasks like lesson planning, creating activities, and things like that. And then I just wanted to show at the end where the stuff we were just talking about will go, some of the future stuff. These are the links that Yujia was referring to, so the research that she wanted to share with all of you is linked in here. And then we’ll also include a slide that mentions the organizations that Amy just talked about, so that you can all reference what was just brought up. So if folks have questions, we would definitely love to hear them. Or if anyone has their own thought or answer to any of these questions that we went through that you want to share, we’d love to hear it. Yeah.
Yujia Ding (48:10):
One real quick thing. That last link on that slide, Knowing the ABCs of Teaching, I read it and I was very impressed at the beginning, thinking it was written by the professor. I think it’s worth reading if you don’t get to any of the other ones, because it’s a great example of the use of a chatbot. So I kind of have to spoil it to tell you: the first part of the writing was actually written by ChatGPT, and it’s fascinating. I’m sure she went through and edited it, but it was done in a matter of seconds. So it goes back to the point of, well, we can just have the bot regurgitate sentences for us. But the second half, obviously, she wrote through her research. So yeah, definitely something to think about, and maybe give to your students, right? Hey, look at this, can you tell if it’s AI generated or not? Because, as I think we alluded to earlier, some of us can tell with our students, but some of us, I mean, I couldn’t tell. This is very high-level writing. And this is published in, I want to say it’s The Science Teacher, I think it’s the National Science Teaching Association’s published magazine. So something to keep in mind.
Amy Kim (49:24):
If there are any questions, feel free to raise your hand or add them here. I’m also interested to hear if any of your students have brought up AI to you, whether they were scared or curious about what AI can do. I’m actually curious to hear from the educators: have their students brought it up? I think it’s such a normal dialogue now, or maybe they use tools without even thinking that it’s AI. I’m curious.
Nina Bamberg (49:53):
Yeah, we’ve definitely heard from people who have interacted with tools that have been around for a while, like Grammarly or Duolingo, without even realizing that they’ve been interacting with AI this whole time.
Speaker 4 (50:16):
I have one question about student use, and I like the concept that you just mentioned about trying to be very open and honest and create that safe space for kids to talk about what they’re doing and how they’re using it. In fact, we had a previous lesson on this with Priten Shah, and he was very supportive of that use, of getting the kids to use the AI. I’m curious about other concepts of boundaries. Let’s say you’re an English teacher or a physics teacher or whatever it is: how do you get them to break through that laziness that was mentioned previously, or get them to do that critical thinking, even though these tools are very accessible? So I think it also becomes, what about being a cop and having to check for this, and what do we have to do? That’s pretty much my whole question.
Amy Kim (51:18):
I think that’s interesting. I think about it as when to introduce that. So just in the example that Nina was showing about the handwriting, sometimes those things can misread and misinterpret where the error was in the question that you were showing. So maybe AI is not a tool we introduce to the students right away, when they’re starting to develop an understanding of different concepts, but a little bit later, once they understand how gravity and the basic equations of Newton’s laws of motion work, and then adopt it to help them build upon that. So how you frame it as an educator is a big part of it. I think we are going to have a constant debate over what should be taught on pen and paper versus digitally. That’s going to be a conversation that we educators have constantly.
(52:12)
My kindergartner is doing math on the computer, and I’m like, I don’t understand why; I could give her a piece of paper and she can write little dots. But I think there’s this desire for technology adoption that happens so early now. So I think that’s going to be a big part of the debate that we all have as educators. I know that doesn’t answer your question, but I think you as an educator can decide: okay, the students have a basic understanding of these concepts, so now maybe we can bring in a tool to help them work faster, adapt it, or build upon it. So I think that’s a little bit of training that we need to do as educators. I don’t know if Nina and Yujia have a different point.
Nina Bamberg (52:48):
Yeah, we talk a lot about AI policies for different classes and innovating assignments for AI. And so the thought is, if there is a way that you are allowing your students to use AI, make sure that you are showing them what that use is, and if you want them to cite or indicate how they used AI, make the guidelines for how you want them to communicate AI use very clear. Because I think there was so much confusion in the beginning: there were some kids who were like, well, I don’t even want to look at ChatGPT because I don’t want to be accused of cheating, and then others not quite knowing how to use it, and things like that. And then we’re getting a lot of questions from teachers who are like, I don’t know how my students are using it.
(53:46)
And so we are definitely encouraging those open conversations: here are acceptable uses, and here’s how you can indicate those uses in your submitted assignments, and things like that. But we’re also recognizing that we’re getting into a territory where we may have to shift assignment types, because we can’t a hundred percent control what students are doing outside of the classroom with these AI tools, and a big part of assessment may have to happen in the classroom. But that doesn’t just mean handwritten essays. There are other assignment types: think of a speech project, where you can’t necessarily control how students prepare it, but they still have to do the final output live and/or answer questions in the classroom without the help of the AI tools. So we’ve talked a lot about different types of assignments, whether it’s case studies, video projects, simulations, things like that.
(54:56)
Or even some kind of in-class collaborative writing, or whatever it is, but really thinking about the types of assignments where we may not be able to 100% grade what students are doing at home. And out of fairness, making sure that we’re not grading students, some of whom may have used AI and some of whom may have not, but thinking about how we can rework assignments to say, okay, however you prepare for this outside the classroom is fine, but you still have to prove that knowledge in the classroom in some way, and that is what you’re assessed on. So we’re doing a lot of work and talking a lot about things like that.
Yujia Ding (55:39):
So, recognizing time, David, to your point, I think one article that you’ll find really helpful is written by Kip Glaser. She’s a principal at Mountain View High School in Northern California. Yes, I’m from NorCal, so I do know exactly where that is; we played them in softball. She wrote an article for EdWeek called “No, AI Detection Won’t Solve Cheating.” So to your point about how we circumvent this idea of how we know whether they’re cheating or not: one part is going back to building that trust, right? Teachers, more often than not, will assume a student is cheating because the work doesn’t look natural. Well, students are cheating and using AI because they’re scared of disappointing their teachers or their parents. There’s this inherent fear of, well, I don’t want to fail. What I tell my students, to create that safe classroom environment, is that the first attempt in learning is FAIL.
(56:30)
The acronym FAIL stands for First Attempt In Learning. I had a quote on my wall this year that said, to succeed is to fail, and to fail is to succeed. I made that up myself, because to me it’s really important to remember that in order to succeed, we have to go through failures. We have to learn from our mistakes. Coming from a background of playing softball my entire life: with a .300 batting average, three hits out of 10, you’re successful. You can be an All-American, but you’re also failing 70% of the time. I did research, and in every single experiment I did, something went wrong, and it drove me crazy. I probably ran the same gel 20 times to get it publication quality, but I learned from my mistakes. And in the article, Principal Kip Glaser gives you five guidelines. The first one is to establish a clear AI policy that clarifies what is allowed.
(57:15)
But you do that by establishing it with your students. And so you can use those guidelines that I mentioned, which are linked at the end of the slide deck, and really create a space where your students are involved not only in making the policy, but also get the chance to use the tool in their projects, right? Use it as a way to do project-based learning and say, okay, I’m going to have you all use this tool; let’s go explore it. Maybe give them a question at the beginning of a lesson, have them input it into the AI, and then see what they get out of it, and whether it’s true or not, since they haven’t learned the content yet. Use it as a way to show them: even if you use AI, you still don’t know if it’s true or not.
(57:54)
So we still have to learn the material and use that critical-thinking muscle in our brains, so that even when we use AI, it stays a tool, another method we can resort to in a time crunch or use to help improve and elevate something, but not something we rely on, because we can be misled. And again, I think it goes back to the idea that if you remember why we go into education, if you teach from the heart and you really build those connections, that ultimately will give you the right answers and ideas about what to do with the technology.
Amy Kim (58:28):
Yeah, I like the human connection piece. I also like Nina’s idea that we have to adapt how we teach to account for this, with the real emphasis that AI is meant to be a tool. And I love the idea of asking them to give a speech about it in real time; you can’t really fake that. I know we’re at the end of our time. I just remember talking to a college admissions counselor recently. I asked, how do you all deal with AI when every student can write their college essay using AI? And she was like, now it’s on us to change how we evaluate student applications. Because even tools like Turnitin aren’t always going to know; they’re only accurate about 50% of the time at catching cheating or not. So it’s true, even for tools that are designed to do it. So it’s about how you frame the work: what are different ways you can ask questions? What different characteristics do you look for in your candidates for college? The world is evolving, and it is part of our job as educators to evolve with the new technology.
(59:33)
Was there something specific we want to end with, Nina, or?
Nina Bamberg (59:38):
I don’t think so. I showed this already, where everyone will be able to find some examples and all of that in the slides. Other than that, just to say thank you to everyone, and we hope to see you at some other events in the future. Oh, oops, I accidentally clicked on a link. That’s fine. But I’ll have this webinar edited and posted in the same place you accessed the Zoom link by early next week.
Amy Kim (01:00:11):
Yeah. Thank you for the great conversation. Thank you, Yujia, Nina.
Nina Bamberg (01:00:15):
Thank you, Amy, and thank you, Yujia, so much for joining us. It was
Amy Kim (01:00:19):
Fun. All right. Bye guys.
Nina Bamberg (01:00:21):
Bye. Thank you. Thank you all.