🎥 Webinar recording

Transcript

Nina Bamberg (00:00):

Welcome to our joint webinar here on the future of higher education with AI. We'll introduce ourselves in a minute, but just to give you a quick overview of how we're framing this conversation today: first, we're going to start with what we see as the impact of AI on higher ed and why it's a particularly important conversation to be having in this context. Then we have some guiding questions, really just some higher-level questions that we've been thinking about. That part will be an open back and forth, and anyone is welcome to jump in and share their thoughts on the questions that we've posed there. And then we'll close out with a couple other topics: some considerations on the role of colleges and universities in an AI-moderated world, the importance of the liberal arts given the context of AI, and how AI can make education more human. That piggybacks off the liberal arts part, but it's really about how AI might actually enhance some of these issues that we bring up. And then some issues that we think higher ed institutions will face in light of AI.

(01:30)
Okay, so my name is Nina, and I'm the director of growth here at pedagog.ai. We are an education technology company, and our goal is really to develop a whole suite of resources, tools, learning platforms, and opportunities for teachers and people who work in education at all levels, to build confidence around AI and demystify AI for everyone involved. Our goal is really to think about the creative ways AI can help solve some problems and pervasive issues that arise in education, as well as helping teachers better understand the technology and the ways their students might be using it. And so our goal here is really to provide as many different resources as we possibly can. If this is your first webinar with us, we do a lot of these, so we would love to see you at some future events. But I'll pass it over to Chris and let him introduce himself as well.

Chris Reitz (02:45):

Yeah, thank you, Nina. So yeah, Chris Reitz here. Thank you for teeing me up, Nina. I work in healthcare as my day job, and I also teach at Columbia University, which is how Nina and I got connected. Just a quick reflection on this: I'm passionate about fostering young minds and helping people find their purpose and achieve their impact. I think we all on this call share that. One thing that has surprised me in my career, though, is that I encountered AI in earnest over 20 years ago in my undergrad at Transylvania University, which is well represented on this call. Appreciate that. And it's funny, at the time I thought, gosh, I would need a PhD, and I just didn't pursue that, but I stayed really close to it. And here we are over 20 years later, and it's in the mainstream.

(03:48)
It's now my day job at Elevance Health, which is Anthem. In addition, the teaching that I've been doing at Columbia for a decade has now evolved into AI. This will be my third AI graduate course. It's not technical, the students aren't technical, and we stay pretty high level; it's more strategic. So I'm just really excited to be here. These are some of the thoughts that I'm thinking about all the time, and I've talked to Nina, preaching about these things, and we said, let's do this together. Let's share what we're thinking about. So thank you.

Nina Bamberg (04:24):

Yeah, no, thank you so much for joining us here today. I'm excited about this conversation. All right. So in this first part, we just wanted to address why AI is particularly impactful for higher ed and why this conversation is so important. One of the big reasons is that higher ed institutions are often thought of as places where students prepare for access to certain careers. The conversation about AI replacing jobs has been around for a while, but it has really ramped up since the release of ChatGPT and other generative AI tools in the last couple of years. And it has institutions, academics, and students themselves asking: will these careers still exist, these careers that we've often prepared students for and focused on? If they do, will they look different? And if not, how are we pivoting? Also, are there new career opportunities that are going to emerge because of AI, and how are institutions going to need to adapt and adjust to those new realities? And then one final, really big question, and I think we're going to get into this a lot more later in this webinar: thinking through whether job readiness will remain the focus of post-secondary education, and should it, right? Should that always be the main focus? And if not, what should be, and what's going to matter in light of the pervasiveness of AI?

Chris Reitz (06:22):

One thing that strikes me about these questions, and I think they're the right questions, great questions, is just how difficult it is to answer them. It's kind of anybody's guess. Anytime we're making predictions about the future, there's a really good chance that we'll get it wrong. One thing that I think we can have greater confidence about is that ways of working are changing very rapidly because of AI and other things. And it's not just the large number of high-growth jobs that literally didn't exist. No one had ever heard of prompt engineering two years ago. But then you go back 10, 20 years, and there are so many data scientists; not that that idea is new, but as a title, I would think that's pretty new. So it's not only that there are new jobs emerging as other jobs may become obsolete over time.

(07:14)
It's also that traditional roles in fields like law, marketing, and management are having to retool pretty rapidly because of technological change. That trend isn't new. Email eliminated the need for typing memos, for example, and I'm sure people fought that as well. But the point is that technology continues its trend of making us more efficient. And so I would just posit, as we continue, that some of the really critical skills for universities to teach are strategic thinking, communication, and creativity. The reason I feel confident saying that is that these are very human activities; as you add AI, the AI becomes a support, and it doesn't really erode the humanness of those activities.

Nina Bamberg (08:15):

Yeah, absolutely. And I think we're going to get into more of that idea later as well. But before we go on to our questions, just along the lines of what universities need to be considering right now: I don't know, Chris, do you want to speak on these points as well, since they were yours?

Chris Reitz (08:38):

Yeah, happy to. So what I see is that the future ahead is just so uncertain, and it's going to be more complex. We're already kind of there; it's like the future is just happening, unfolding in front of our eyes. I spoke about this at a conference of gifted educators two months ago in Kentucky. The ability to consider multiple perspectives, be a convener, and consider things objectively is going to be very valuable going forward, especially as things become more polarized with politics and things like that. We already talked about critical thinking; I meant to mention that, and I said strategic thinking. One ability that I think education gives us is exposure to a vast array of mental models. So you may not be training to be an accountant, but if you take an accounting class, there are a lot of great lenses in there. Or economics; you name it.

(09:49)
So this broad exposure to different fields and different modes of thinking, as well as creativity, is very powerful. It breaks down some of the prejudice that might've been taught before higher ed, for example. And then coupling that with being discerning, having a method for making decisions and testing hypotheses, that kind of scientific thinking, is just going to be really, really valuable. Additionally, well, I guess I kind of talked about this, the ability to think in a structured way. Yeah, really important. And I know Dr. Becky Thomas is on the call. I've attended a great call of hers, and I know that she is focused on AI agents, so I'll just give a shout-out there. One thing I see coming, thinking about the future here, is that AI agents, and thinking about how things get automated, can increasingly facilitate collaboration across disciplines and domains over time.

(10:56)
And why that's important is that there's collective intelligence that can be leveraged. Today we just call it collaboration, and it's very human. It's a lot of work, and frankly, it's accompanied by some stress, or it can be fraught. If we augment that with AI, there's coordination that can happen so that many minds are contributing, and then you get this diversity of inputs. There's more to come on that, but think about facilitating data exchange, harmonizing standards, things like that, connecting data sets that normally wouldn't overlap.

Nina Bamberg (11:47):

Thank you. So as I said, in this next section we want this to be open, so we'd love to hear all of your thoughts on some of these things as well. Assuming that you've all been thinking about these impacts of AI, and that's probably what brought you here today, we'd love to hear your thoughts if you're willing to jump in and share them. Our first question is: how might AI shape the ways people approach thinking and human cognition? Chris, do you want to go first? I have a couple thoughts, but

Chris Reitz (12:29):

Oh, that's a big one, isn't it? Happy to. Well, one thing is that we humans have some innate needs in terms of integrating knowledge. A lot of it happens subconsciously, such as in dreams and when we're sleeping, but we need to make sense of the world. That includes everything from filtering information to making decisions about how we navigate the world and what we're learning. To give some quick examples: AI has an opportunity to personalize learning, both identifying needs and mobilizing content, connecting a person with things that they're interested in. Additionally, decision support: AI is very good at making recommendations and predictions, and even simulating outcomes of decisions. It can also do things like uncover hidden knowledge, help connect with a deeper meaning, a deeper understanding of underlying phenomena, and identify complex relationships, things that are sometimes hard for humans to do. Or maybe you have an intuition about something, but it's hard to put your finger on it. AI's got that bandwidth, that greater breadth of content and thinking, which can really complement the questions that we have.

Nina Bamberg (14:01):

Yeah, when I read this question, I also think of the potential negative side of things. We've heard a lot from teachers and people we've talked to that they're afraid people are going to see AI as almost too good at thinking. Because it's so good at mimicking certain human thinking abilities, we've gotten the initial reaction from people of saying, oh, do I even need to think anymore? Do I even need to know these things anymore? And I think that's been a common concern throughout past technological developments too, whether it's search engines or anything like that, and it has proven not to be true. But I do also worry that people will be too inclined to outsource some thinking to AI when they shouldn't, or before they fully develop those thinking skills for themselves. I think it's important that we still make sure we're teaching how to do the thinking first, before we're showing students, or before they're allowed to use AI to augment or supplement something that they're doing, to either make it faster or synthesize information or something like that. I don't know if you have similar concerns.

Chris Reitz (15:48):

Yeah, I share that same concern, and I know that you're not necessarily representing that as your position, but that is a concern I share with you. A counterpoint could be that humans still learn and do math even though we have calculators; a very simple analogy. I would argue that generative AI is a great facilitator, an incredible thinking partner, but we should retain the job of thinking; people should be doing the thinking, and hopefully it stays that way. I know there's a lot of science fiction that has us becoming these drones that are just seeking; if you've seen WALL-E, for example, that's a good example. I don't want us to go down that route.

Nina Bamberg (16:45):

Yeah, no, exactly. Did anyone else have thoughts on this one that they wanted to jump in with, either concerns or positives that they see in how AI might shape the way people approach thinking?

Speaker 3 (17:05):

Yeah, I can share something. I actually think it can help us have time to think about things that actually require deep thinking, by automating some simple tasks, or things that we might spend a lot of time doing that aren't necessarily growing or using our cognition. So if it culls research articles together faster or skims articles, a human still has to think about how those things are connected; or it might surface a connection, but then a human has to think about how that fits in with the greater hypothesis, right? Because in its current form, the quality of the results that come back is based on the human's ability to write a good question, a question that actually helps them get the result that they want. So again, that could change, but at the moment I see it as a positive in that it could actually promote deeper thinking, by allowing all people, but especially students, not to get so bogged down in the tactical weeds of tasks that feel more rote. So I'm pretty much all positive on it at this point, at least in its current form, as a way to help educators focus on the things that are really uniquely human at the moment, from my perspective.

Chris Reitz (18:39):

I love that. That is quite a vision. Take the mundane off of our plates. Very cool. Well, do you want to go to the next one, Nina? Unless anybody else wants to jump in, feel free.

Nina Bamberg (18:55):

So our next question is: how are ways of working evolving, and what are some of the most enduring skills in this emerging era of AI? I know, Chris, you touched on this a little already when you talked about ways of working, but did you want to expand at all?

Chris Reitz (19:14):

Yeah, I'm just thinking about that coordination layer I talked about; I called it AI agents. But some other thoughts on that: I think about the nature of collaboration. This is what I did my master's on at Columbia, and it's really about surfacing what's in people's heads. Factories had assets that you wanted to maximize the output from. Today, with knowledge workers, the assets, so to speak (I don't like calling people assets or resources), of today's productive business activities walk out the door every day, or rather log off at five. But just think about the coordination that can be done so effectively and efficiently via AI, so that people are adding their input and touching different documents at different times. We've all experienced what I'll call version control hell, where you've got this important project and somebody loses track and updates an older version, and then chaos happens. I'm sure you've been there. But why is that still a problem? I think it's gotten a lot better with shared documents, and Google Docs is a great example. But yeah, that's just some quick responses; I'd love to hear other thoughts.

Nina Bamberg (20:59):

And I know we get into this later, but on the most enduring skills part, I think it's what we've touched on in terms of the skills that are going to remain human. Things like creativity and empathy, those truly human skills that AI can't replicate, are going to be the skills that endure as this technology continues to advance. There's no way around the fact that AI is going to be better or faster than humans at a range of tasks, and we're going to have to get away from thinking that we have to teach students to be better than AI at those tasks; that's probably not achievable, and AI might always be able to do them faster regardless. But there are skills that are uniquely human that AI cannot replicate. So shifting the focus to that, making sure that humans are good at what humans are good at, is, I think, another thing that's going to have a huge impact on the future of school and work.

Chris Reitz (22:37):

I would just add, I think universities, and education more broadly, are critical in helping us keep our humanity. To your point about things that are uniquely human: AI doesn't get tickled by a random thought that pops into its head. It doesn't wander until we tell it to, hey, write about this topic; it's not really curious, to give an example. So gosh, I just hope that doesn't get eroded. We've got more topics to discuss on this, but I think there's something about going to a university in person, as well as connecting online just like this. That human connection helps us; it's a solve against eroding what's uniquely human. So hold on tight to that.

Nina Bamberg (23:41):

Yeah. Did anyone else want to share anything on this question or we can move on to the next one?

Speaker 3 (23:50):

I think it's covered, but I was just going to say that the themes I feel like you all talked about are emotional intelligence and cultural intelligence. Those are bigger topics that, again, right now AI doesn't do; it's still humans who are often recognizing bias or thinking of things like that. So I think that matters when working: being able to see who's around us and what the differences are. We're a global world, with people coming from different perspectives. So I think that can be even more important when maybe we are all sharing a document or we're not face-to-face. We still need to know those things to be able to interpret what might be happening in a different format.

Chris Reitz (24:36):

That’s a great point.

Nina Bamberg (24:37):

Yeah, absolutely.

Chris Reitz (24:40):

Yeah, diagnosing dysfunction on a team: you've got to notice it first, and unless it's codified in some way, the AI couldn't care less unless it's directed at that.

Nina Bamberg (24:56):

Very cool. So our next question is what is the role of universities as students increasingly connect across campuses and borders? What unique value do universities offer?

Chris Reitz (25:11):

You want me to take a crack at it? Happy to. Yeah, so I think that human connection, and we've touched on it; I feel like we've got kind of a thesis coming together that the value of human connection really increases in this coming era. And it's not just connecting between people, but also connecting within ourselves. For the latter, I think we sometimes have to disconnect, go out into nature, leave the phone behind, for example, or just not pick it up. Also, just thinking about how college is such a unique time. I have such great memories personally, and I imagine most or all of you do as well, of coming together with people from diverse backgrounds and perspectives. Oh gosh, I don't know the analogy; it's a crossroads. All these backgrounds come together, but then you're focused on an array of future paths. There's something really interesting about that chrysalis kind of time. My recommendation to universities would be to wrap experiences around that: emphasize the value of connecting and sharpening those communication skills. And don't ignore the virtual either; don't just assume that it's all about in person. If the ways we're working are increasingly virtual, then it seems like maybe education should mirror that to a degree. What do others think?

Nina Bamberg (26:59):

I mean, the point you made about social cohesion on the last one is interesting here too. I think universities definitely play a role in creating social experiences for students and teaching people how to live in community with others, outside of their families or wherever they grew up. And so I think that role can't be overstated either.

Chris Reitz (27:35):

Yeah, I agree completely.

Nina Bamberg (27:38):

All right, we can hop on to the next question so we have time to get to our other points at the end as well. Oh, well, this one's, I guess, kind of simple, and we get to it in the next part. So did you want to say anything here, or

Chris Reitz (27:57):

Skip? Yeah, I'll say one thing. So Columbia University's former president, Lee Bollinger, has proposed (I don't know if it's his idea or not) this idea of the fourth purpose initiative. The first three purposes are research, education, and public service, and he's pushing that into another domain of impact, where universities position the academy, so to speak, to address larger societal problems. It could be through partnerships, it could be structure; AI could facilitate this. Some examples: you have a teaching hospital that is actually serving people, researching diseases, treating illnesses, and things like that, but at the same time, students are learning. I think that's amazing. Another one is tech transfer offices, where research might lead to a breakthrough that could be monetized, and putting resources behind that to help amplify the impact. An example that Bollinger uses (I was assuming he's a doctor; actually, he's an attorney) is: what if you had a small law firm attached to a law school? Maybe they're providing free services, maybe not, but it's a place that students could graduate to, and it engages students in the community. Just kind of interesting.

Nina Bamberg (29:39):

Yeah. Did anyone else have any thoughts on this one that they would want to jump in on?

Chris Reitz (29:48):

Another thing I'll add, not to dominate here, but I think it's really important that universities have a policy on generative AI. I don't think that'll surprise anyone. If I know my alma mater, Transylvania is really embracing this, and so is Columbia. I will say that when you prohibit generative AI, it really penalizes the most resourceful, creative, and engaged students, and it also increases opportunities to cheat, which is really unfortunate. So why wouldn't you just flip that and open it up and say, hey, disclose? In the class that I teach, we say you can use it, but you have to disclose: you have to share your prompt and then share the output as well, which hopefully is not identical to what you actually submit. Dr. Thomas has a thought.

Speaker 4 (30:43):

Yeah, thank you. And thank you for the shout-out. I think we're doing well here at Transylvania in thinking about this. I want to pick up on something Chris was saying and maybe think about two different things that a university can do. One is, as you've all been saying, to help our students understand how to use AI in their careers. But I also think there's still a role for doing old-fashioned multiplication with paper and pencil, to Chris's point that, yeah, we have calculators, but you still have to learn mathematical thinking. The analogy I've been trying to raise with our students is: you could go to the gym and use a forklift to lift the weight over and over, but that's not getting what you want to get done, done. In the same way, maybe you have to write essays without using generative AI for a while before easing into generative AI. There's still a place for doing the mental equivalent of lifting weights, and sometimes using automation cuts against the skill you're trying to develop. At the same time, let's not let our students leave here without knowing a lot about how to engage with AI. So I just wanted to throw that in. Thanks.

Chris Reitz (31:57):

Yeah, I appreciate you adding that. There sure is something satisfying about proving things to ourselves, just the experience of doing hard things and growing in that way.

Nina Bamberg (32:09):

Just quickly: yeah, I completely agree. I don't think the message here is that AI should replace the learning of writing. The calculator analogy comes up over and over again, but it's the same idea: students are taught how to do it themselves before they outsource the work to the calculator. I completely agree that generative AI is the same thing for writing; it doesn't at all negate the value of humans being able to write. Also, consider that what generative AI is running on is all human-created content right now, the history of human creativity, writing, and thinking, and we're asking it to synthesize that information and create new things based on it. We can't just all of a sudden stop doing our own writing or developing our own voices just because AI can all of a sudden write a coherent paragraph.

Chris Reitz (33:40):

You made me think about a future where we stop having opinions. That’s terrifying,

Nina Bamberg (33:44):

Right?

Chris Reitz (33:44):

Because then you just go along with everything like, oh, it sounds good. I don’t want that.

Nina Bamberg (33:51):

Right. Okay, so I'm going to skip this question for now; I have a section on it later. I put this in here when Priton was supposed to join, and he loves talking about this topic. But in a few slides, we'll get into a little bit of what we see as the role of the liberal arts in a world with AI and why we think it's still important. First, though, and we've touched a little on this already, here's what we see as the role of universities in an AI-moderated world, and it's really what we've been talking about: this idea of preparing people to be humans. AI will absolutely increase the efficiency of certain tasks. It won't necessarily replace human thinking, but it will increase the need for human connection. Colleges and universities are places where students grow as human beings. It's where they become immersed in different perspectives and different contexts. It's where they explore different potential future paths, whether that be a career, further academics, or anything like that. I don't know, Chris, did you have anything you wanted to add here?

Chris Reitz (35:26):

Yeah, it kind of goes back to some of the earlier stuff I was saying, but the judgment, and Lisa, you mentioned empathy and creativity, those are critical and essential for addressing complex challenges that maybe haven't even emerged yet, for continuing the innovation that's really driving our economic growth, for problem solving, and ultimately for keeping ethics in the conversation as things change. We're increasingly in uncharted territory, so what's fair? There may not be any precedent for that, so being able to think critically about that kind of thing matters. And I tend to say this a lot as time goes on, but learning builds resiliency in communities. How it does that is it empowers people to navigate uncertainty, overcome adversity, and have the competence to thrive and go up against challenges. Some more things I could say: critical thinking and problem solving, root cause analysis, just being adaptable. Learning also fosters resourcefulness by encouraging that creativity I mentioned. Resilience can be both a behavior and kind of a mindset. But yeah, just weaving that all together. I know it's a little jumbled, but

Nina Bamberg (37:07):

No, awesome. And this gets into the conversation from our initial set of questions about what role higher ed is going to play in a world with AI, and whether it's going to maintain its same place in society or have the same or a similar impact that it's currently having. And the answer is that it kind of needs to shift its focus to everything we just talked about: creating an optimal environment for human collaboration, exploring different perspectives and possibilities, and building connections both in person and virtually, in order to prepare students for a world with this technology in it, with all of its possibility but also all of its potential downsides. And I'll even add here that one major skill students are going to need, that we're all going to need, in a world with AI is the ability to talk about AI, talk about its impact on society, and ask some of the big ethical questions that the technology poses, right?

(38:43)
We're currently working on a series of ethics lessons for younger students on things like facial recognition technology, AI in policing and the justice system, and even things like AI in healthcare, medical diagnoses, and therapy, or government use of AI. These are all conversations that today's students are going to need to be prepared to have, and to have opinions about, as they go on to become the future AI developers, policymakers, and lawmakers, because these are by no means settled debates or even ethically black-and-white questions. And so I think a huge role of education is going to be preparing students for exactly those types of conversations, which are going to be pervasive as AI technology continues to permeate every aspect of our society.

Chris Reitz (40:03):

Well said. I was just reflecting on that: for those of us in society who are privileged to be able to go to college, it's kind of a laboratory and sandbox, and you're trying out ideas. I think that's critical for, just as you were saying, Nina, understanding how to get on in the world, just how to be. And one thing you suggested to me, you put it in my head even though you didn't say it, was advocacy. How do you advocate for yourself? How do you advocate for causes that you care about? It'd be a little scary just to turn someone loose on, say, Twitter or X, or even in the town square, which is increasingly online, to represent an opinion when they don't have skills for listening and things like that. So I think this experience of coming together and connecting is really critical for helping people hone that skill and calibrate, so that their opinions get heard and engaged. What do others think? Any other reflections?

Speaker 3 (41:27):

Yeah, to what you're talking about with having discussions, I think higher education institutions can bring people together who wouldn't ordinarily be together. It can force you to hear different opinions and learn to respect them and dialogue, which you don't have to do in your normal life, based on maybe where you live or where you work; some areas are going to be more homogeneous than others. So even learning how to talk about AI, like you were mentioning, having discussions about the pros and cons: in a higher education environment, you're going to hear more perspectives than you might if you just asked your three friends, right, for their thoughts on it. So I think it's in those discussions, but it's also that dialogue of disagreement that I think is a very core skill. And you might not get that if you're just chatting with AI, right? Because you're just asking it a prompt, and it doesn't have feelings. So I think leveraging the technology and sifting through, but hearing people who think differently: that's something higher ed can definitely do that has a great impact.

Chris Reitz (42:42):

That’s awesome. Thank you.

Nina Bamberg (42:48):

Yeah. Actually, if anyone’s interested, our last free webinar was a collaboration event with an organization that we work with called Thinker Analytics, which is run out of the Harvard Philosophy Department. So if you go back on our website, you’ll see a webinar entitled Empathy in the Age of AI, or The Importance of Teaching Empathy in the Age of AI. A lot of their work at Thinker Analytics is about making arguments and having productive disagreements. So I would highly recommend going back and either checking out their work or watching our previous webinar with them, because we covered a lot of where this conversation just went in terms of having productive disagreements, specifically about ethical issues around AI.

(43:39)
Alright, we’re getting close to time, so I just wanted to go over these last couple of things. If we don’t get to everything, or if anyone has to jump off early, these slides will be posted along with this recording, so you’ll be able to come back and take a look. But I quickly wanted to touch on something that we see as important: the importance of a liberal arts education in the age of AI. I bring this up because a lot of times when we talk about advancing technology, and especially how it impacts education, the thought immediately goes to how we respond in the STEM fields. Is it pushing students towards more science careers or math careers, or studying those kinds of skills? And there’s this assumption that the softer skills that might be learned in a liberal arts education are less relevant or less important.

(44:43)
And we really want to emphasize that we absolutely don’t think that’s true, and that these subjects have a really important place in the age of AI. We want to push back on this assumption that these degrees will become less and less important in an age of AI, for all the reasons that we’ve already been bringing up. Liberal arts classes are the ones that often foster critical thinking, creativity, a passion for lifelong learning, exposure to diverse disciplines, and critical inquiry. So that really prepares students with all these skills that we’ve been discussing: the need to be adaptable as technology continues to grow, the ability to understand things from multiple perspectives, and the ability to critically analyze what’s going on in the world. There’s also this idea that the time may have come to distance ourselves from an overly career-centric model of education and move towards a more human-centered focus. And in doing that, we need strategies to foster a culture of innovation, build up these reasoning skills we were just talking about, and teach students how to engage in society, which is just going to be critical given AI. So we just want to say that we absolutely don’t think these degrees or courses should be undervalued because of AI. Exactly the opposite.

(46:47)
Okay. So also just quickly, this part may be a little more practical, but let’s talk about how AI can actually facilitate some of this making learning more human that we’ve been talking about, this potential of AI and what it can do. Some of those things include ways it can help facilitate collaboration, whether that’s making group projects easier, helping synthesize information, or identifying points where students need to discuss further. It’s also a great tool for strengthening reasoning and debate skills: it can help students explore diverse perspectives and encourage deeper thinking. And the real hope is that this technology becomes an important tool to help humans solve some of the world’s most pressing issues. I don’t know, Chris, is there anything you wanted to add on this point before we wrap up with the issues?

Chris Reitz (47:55):

I was just thinking about the power to connect, which is maybe not unique to higher ed, since the concept of a network is itself a connection type, full of connections, but that interpersonal connection and how important it is. Think about how a student shows up with their optimism and hopes and interests, and is then given a platform, through the liberal arts and higher ed in general, to discover and be supported and encouraged. And technology in general has this ability to connect people with more resources and more people. What I mean by that, more specifically, is if a student has a really unique interest and they’re not being served by their university, maybe traditionally they’d have to transfer or something, and that’s costly and a big decision. But why couldn’t they collaborate? They could probably get credit where they are by collaborating across the country or across the world. If they didn’t land where they needed to be, but they discovered an interest that’s maybe unique, then they can possibly connect with a professor elsewhere and maybe even contribute to that field.

(49:24)
That person might be a scientist who later cures some disease, just because they were fostered in just the right way early on. I think that’s really powerful. What a waste if that weren’t happening.

Nina Bamberg (49:39):

That’s a great point. Okay, so this last part that we wanted to touch on is just some challenges that we potentially see as colleges and universities start to incorporate AI on campuses. And I know, Chris, these are a lot of points that you brought up, but we can touch on them together. One is that higher education is often very decentralized, so policies or guidance can often happen class to class, set by individual professors, and university presidents or other administrators can have potentially limited power. Change at the university level can therefore be slow, as it takes time for buy-in to grow. But I don’t know, Chris, do you want to touch on the other potential side of this, or how AI could potentially act as a centralizing force?

Chris Reitz (50:53):

Yeah, I think we’re already seeing this. Some of the thinking, we’d describe it as a data moat: good luck being in online retail if you’re going to go up against Amazon. They have so much data, they kind of understand people’s preferences, so it’d be really hard to catch up. And AI just accelerates that; it concentrates more data among those traditional incumbents. I will say, though, that AI also has the ability to democratize access, but that’s more of a policy decision, helping it reach more people, more broadly, in fair ways. I do have some thoughts I’ll park, since I know we’re right at time, but blockchain and cryptocurrency are big decentralizing forces, so that’s interesting. Maybe that’s a future webinar where we just explore that. I don’t have the answers, but one idea that came to me this morning was: how could AI help evolve and optimize how assessments are done for admissions?

(52:08)
And one challenge there would be that if past admissions decisions were biased or in any way discriminatory, then that data can’t be used; it would just propagate the bias. So that’s challenging. But could AI be objective, form a more holistic profile or understanding of an individual applicant, and then connect them with the opportunities where their likelihood of success and impact is greatest, maybe even recommending a major or something like that? Hey, you wrote this essay, you’re passionate about this, we think this college is good for you. We’ve gone ahead and reviewed your profile, here are their top majors, and you’d be great for this one. Just an idea, but that could be where we’re headed.

Nina Bamberg (53:00):

And then the last issue we wanted to bring up was cost, from two perspectives: the cost to the university of integrating new technologies, training staff, and things like that, but also, right, a potential new cost-benefit analysis on the part of students. Are they going to rethink the need for an expensive college degree if the benefit isn’t clear? Also on the cost side, what does that mean for how implementation might vary across traditional elite schools versus other lesser-resourced universities, and what might that look like? However, technology obviously does have the ability to do things at a larger scale for a lower cost, so there are also some potential cost-lowering benefits of AI if it’s used that way. But I know we’re over time and I don’t want to keep people too long. If there are any final thoughts or questions, I’m happy to stick around for a couple minutes, but if folks have to go, I understand, and thank you so much for coming and joining this conversation.

Chris Reitz (54:46):

Likewise, I’ll echo that. I just appreciate you all coming, engaging, and sharing your passion, and I think there’s a ton of passion in this group. It’s just awesome to have a place to debate and inspect these things. Really exciting.

Nina Bamberg (55:05):

And like I said, this recording will be posted, so feel free to share with colleagues or anyone else that you think might be interested.

Chris Reitz (55:16):

Nina, thanks for putting this together. Thanks again to each of you and hopefully more to come.