
The Higher Ed Podcast
The place for authors, professors, and curious listeners to get practical writing advice, behind-the-scenes publishing content, and firsthand experiences shared by textbook authors.
AI in the Classroom with Kris Hans
It's one thing to hear about AI from tech gurus, but they don't always share an educator's perspective. If you're wondering how AI will REALLY impact higher education, special guest and Kendall Hunt author, Kris Hans, is sharing his years of experience successfully merging AI and college courses. Join us to explore the practical benefits for educators, from streamlining course prep to powering personalized learning. We'll also unpack the ethical considerations around academic integrity and responsible AI use. Get professor insight on emerging trends, cross-disciplinary impacts, and preparing students for an AI-integrated future!
Connect with Kris Hans:
https://www.linkedin.com/in/krishans/
khans@mtroyal.ca
No time to listen? Read the blog post! (5 min. read)
Be sure to follow us below!
Linkedin
https://www.linkedin.com/company/kendall-hunt-publishing-company
Facebook
https://www.facebook.com/KendallHuntHE/
Interested in publishing with us?
Contact Jen Lewis at jrlewis@kendallhunt.com
Jen Lewis 0:06
Alright, welcome on in to the Higher Ed Podcast. I am your host, Jen Lewis, Director of Marketing. I have the most interesting topic today - a very hot topic in higher education. We are talking about AI in the classroom, so we decided to pull in an expert. I have with me today Kris Hans. I'm going to let you introduce yourself, Kris. Why don't you tell the audience who you are, what makes you an expert in AI, and maybe some other things that you've got going on?
Kris Hans 0:38
Alright, thank you for having me, Jen. So for myself, I've been teaching since 2005. I teach a wide range of courses - it's quite a mouthful listing all the different ones - but I first started off teaching entrepreneurship and marketing back in 2005. If you go to my website - it's Kris with a K: Kris, last name Hans, .ca - you'll see all the courses. Currently, I teach business communications and creativity. Those are my main courses, and that's where I've been using, or letting my students use, generative AI writing tools in the classroom. Just to give you a little more background: back in 1999, I did my first tech startup. I learned how to code - JavaScript, MySQL for databases, HTML, CSS, all that kind of stuff - and went on to do that first tech startup. So I've always had an interest in technology, and this has kind of refreshed the ability to go and execute, especially in higher education, and we'll talk more about that. But yeah, I've been in the sphere of technology and innovation for many years.
Jen Lewis 2:01
You know, I have to say, coding is one thing - I am a creative person as well. Marketing, design, all those things. But coding is one thing that I've just never, ever wanted to learn, nor do I ever want to be a part of.
Kris Hans 2:15
Well, now you don't need to because you can use the generative AI tools.
Jen Lewis 2:17
I know.
Kris Hans 2:19
But if you talk to programmers, they'll tell you it's generating a bunch of crap. But the nice thing is with the increased bandwidth for internet and then also just the processing power that we have with the devices, I don't know if it really makes too much of a difference if the code is bloated, as long as it works and it is pretty for people to interact with from a design perspective.
Jen Lewis 2:47
Sure. Well, the point of bringing you on today is because I think there's been a lot of conversation and a lot of rumblings about AI, especially in the higher education field. So why don't we just kick things off? I'm going to tee this question up for you: how do you see AI transforming traditional teaching methods in higher education?
Kris Hans 3:07
Well, it has been a big transition. You know, they say it was Benjamin Franklin who said there are two certainties in life: death and taxes. Well, the third is change, and if you don't change, you're basically going to wither away. The pandemic accelerated a lot of educational technology and innovation. And now, with the generative AI tools that are available, a lot of the reaction was fear mongering in terms of plagiarism, students cheating, and so on and so forth. But certainly the way I look at it, it's a tool just like any other tool. I'm sure the same type of conversations happened when calculators came forward, or Excel spreadsheets for accounting or finance. So it's just a tool, and it's a matter of how you're going to implement it in the higher education environment. I think from an educator's perspective, it can give you - I say it's almost the equivalent of the Iron Man suit that Tony Stark has. It can give you superpowers. For instance, you could generate multiple tests and do a lot of preparation or exercises in a relatively short period of time. And because you're the subject matter expert, it can help with the preparation. So that's one thing I'm definitely doing: giving my students a lot more access to exercises that they can complete on their own using the generative AI tools.
Jen Lewis 4:47
Yeah, that's great. Let's back up for a second, because you mentioned the word plagiarism. I mean, I work for a publisher, you work in education…kind of the same fears here, with copyright, plagiarism, et cetera. So can you maybe explain, as an educator in the classroom, how do you differentiate?
How do you know if people are using AI to write papers or provide content to you in the classroom?
Kris Hans 5:16
Well, I mean, there have now been multiple peer-reviewed studies on this. You cannot determine whether content has been AI generated or not. There's no way to distinguish it. I've come across colleagues of mine who say, "We can tell. I can tell - there are certain words." But again, at the end of it, if a student is taking the content in my course anyway - for business communication - to me it doesn't really make much of a difference whether they're using it or not. It's a matter of the application of the tool. It's that old adage: garbage in, garbage out. If you're going to take some crap and submit it, you're going to be held responsible for that. And there are certain things I'm looking for where the AI isn't there yet from a concept standpoint. I imagine maybe in creative writing - but even on that side, because people don't understand these large language models, there are billions of documents that have been put together, and there are intellectual property issues. For instance, Stephen King's novels are included, without his permission, in OpenAI's ChatGPT large language model. There are a number of authors like that, and given that their work is part of the LLM, you could tap into that and use that kind of writing style. So again, I think it's just a starting point. It can give you ideas. I certainly wouldn't hand in something that is AI generated, but it could be a starting point. You could massage it, make it your own - it's basically a different kind of skill set, I would say, in terms of being able to edit it and make it your own again.
Jen Lewis 7:09
Sure. And I think you made a really interesting point earlier when you talked about how this is kind of similar to when calculators were introduced, and that this is merely a tool. So in terms of being a professor, being an educator - I think it can often be overwhelming for educators to understand how they can incorporate this into their classroom. Do you have any specific AI tools or applications that you would tell your colleagues to use in their curriculum?
Kris Hans 7:37
Yeah. So in terms of the tools, probably the most famous one that everybody has been talking about for the last couple of years is OpenAI's ChatGPT. And this year - it's only been a handful of months - they've actually made their premium version, their latest LLM, GPT-4o, available for free. Microsoft, which controls 49% of OpenAI, has their Copilot tool, so if you go to bing.com or copilot.bing.com, you'll be able to access that. Then there's another one Google has come out with: their Gemini tool. Apple is coming too - they partnered up with OpenAI and are going to be coming out with Apple Intelligence. They just released it in the iOS 18.1 beta. So these are just some of the AI writing tools. And again, it depends on what you're looking at for your course content, but I think the thing people are really looking at is the writing perspective. With the multimodal models that OpenAI has created - and Copilot is basically running off that same engine, and Gemini has kind of followed suit with Google's product - you can also generate images, you can create tables; there are all sorts of things you can do. Beyond that, a lot of the tools out there are most likely using ChatGPT as their engine. For instance, Canva has AI abilities. Grammarly, which was known for its grammar editing suggestions, now does more than just grammar - it actually suggests complete rephrasings of your sentences and paragraphs. So from a writing perspective, those are probably the tools I would look through. There are thousands of AI writers out there; these are probably the most well-known and accessible, and again, they're free. That's the nice thing, just from an accessibility and equity standpoint.
Jen Lewis 9:57
Yeah. And you know, I think there are sometimes a lot of misconceptions around what AI is. I get asked a lot of questions, in the higher education realm, about how educators can use this in their classroom in terms of grading and day-to-day tasks. I'm someone who doesn't work in education. Is there anything out there that professors can use in the classroom to help with their day-to-day tasks?
Kris Hans 10:23
Yeah. And that's where, Jen, as I mentioned earlier - I think for some of the course preparation, if you're, let's say, writing case studies or exam questions, exercises, even lesson plans - there are a bunch of things it can definitely help you out with. It's just a matter of how you prompt the AI model. If you know what you're looking for - and especially now with these multimodal models where you can upload and actually provide examples - it does a pretty decent job of getting close. It's not 100%. You just have to work with it, iterate on it, get it to a point where it can help you, and then finish it on your own. For instance, this past year, we created a bunch of case studies for our courses for the students to work on for their group project. The way we normally did it would take us probably at least a few days, maybe even a week - not every single day, but spread out. Now what we can do is take a bunch of articles, put them together, throw them into an AI writer, and it can generate something in seconds. Then we can iterate, massage it, and make it our own.
Jen Lewis 11:49
Yeah. And I think it's important to note that, because like you mentioned earlier with the fear mongering and everything we see in the media, I think people have to understand there's always going to be a human component to using AI, right? It's not just a click and everything's done and ready for you. Speaking to that, are there any challenges that you've encountered with using AI, just in your day-to-day or in the classroom in general?
Kris Hans 12:14
Yeah, it's interesting. In terms of my journey, I was probably one of the first in higher education - for sure at Mount Royal University, where I am, and maybe even just in general in higher education. I allowed my students to use AI last year, in the spring and summer semesters, once I got the go-ahead, and that was in my business communication course. Prior to that, I taught computer-mediated communication for the public relations students, where we looked at the technology and how it's going to impact students and the profession. So I've been playing with this technology since the fall of 2022, because I had to prepare for that public relations course. And one thing - once I got the approval from my department and my course coordinator to experiment in the spring and summer, it was interesting: none of my students used AI, even though I allowed it. I gave specific instructions for the assessments, and I actually created an AI policy as well on how they could use the generative AI writing tools, and none of them used it. It was interesting - a lot of them cited their reasons, and it was good: they wanted to learn for themselves. They want to have that ability - what if the system goes down? These were good reasons in my mind. But again, one of the reasons I've pushed forward is that a lot of people have this other fear beyond just students cheating - and students have always been cheating. Now this is just another tool, right?
Jen Lewis 13:49
Right.
Kris Hans 13:49
But you know, that's another matter. What I'm looking at - one of our duties and responsibilities as educators is to get students to complete their degree so they can get gainful employment. And now, in the workplace, a lot of people have the fear that AI is going to replace their job. It isn't that AI is going to replace your job - it's other people, the human beings using AI, who are basically going to outperform you if you don't know how to use the tool. So I wanted the students to be able to see how these tools can work for them, and to use them in an ethical and responsible way.
Jen Lewis 14:20
For sure.
Kris Hans 14:29
And so I have seen uptake over the last year, but it took a while. The analogy I've been giving people is: imagine you're a parent. You tell your kid, "Don't do something," and they want to do it ten times more just because of that. As soon as I took that off the table, it was like, "Ohh, he's letting me use it. So now it's not cool anymore." That was kind of the sense I was getting.
Jen Lewis 14:53
Right.
Kris Hans 14:55
And then I think the other thing I realized - again, a lot of people have these preconceived notions that just because somebody is young - and I mean, there are mature students as well, but young people are supposed to be tech savvy because they were born into that environment of smartphones and tablets and all these kinds of devices. But you know, don't kid yourselves.
Jen Lewis 15:18
Yeah.
Kris Hans 15:20
They're not that tech savvy. You have to have a genuine interest. So one of my requirements, especially for the tests I have the students do, is: OK, it's like choose your own adventure - use your brain, or use your brain augmented with AI. And if you're going to use AI, then you have to show your work. It's similar to math: if you just put in the answer on a math question, you'd get it wrong - you'd get a zero. Just like with math, where you show the step-by-step - "This is how I did the calculations, and this is how I arrived at it" - you can get part marks. So what I requested of my students is to provide me with screenshots of their interaction with the AI writer. And what I discovered is that many students do not know how to generate screenshots, so now I've had to show them that.
Jen Lewis 16:13
Oh my.
Kris Hans 16:21
And then beyond that, what I suggested to them, especially in that time-constrained environment of a test, is maybe use Copilot. Because with Copilot, if you're logged in, you do have the ability to export your full interaction with the AI writer. There are limitations - you can only go back and forth in one conversation for 30 prompts - but I don't know if they need to go beyond that anyway in a test environment.
Jen Lewis 16:50
Yeah, I think it's really important what you said about teaching them how to use it now, because I can confirm that in my professional career, I use AI as a marketing director. I use AI in my daily life. I think sometimes it's nice not to start from a blank canvas when you're trying to figure out some copy to write. It's nice to use it in that regard. But you know, when you're talking about Copilot and using it for math or things of that nature -
Kris Hans 17:11
Yeah.
Jen Lewis 17:16
Do you think it's creating a more personalized learning experience for your students?
Kris Hans 17:21
Well, it certainly can. And now, especially with OpenAI, how they've opened it up so that you don't have to have the subscription - I mean, the ability to create your own GPTs. I haven't done this yet, but I'm thinking for the upcoming year we probably should, where we create custom GPTs with all the instructions for the various assignments, upload all of those, plus any kind of questions people might have. Then the student can use that GPT to get custom responses back. Obviously, they should still double check with me or their instructor, but at least it's something - like you were saying, instead of a blank canvas. It's almost like a tutor on the side.
Jen Lewis 18:10
Mm-hmm.
Kris Hans 18:10
Right. And so, you know, you're getting some assistance that way. I think the other thing, too, that a lot of people don't realize - I look at it with ChatGPT.
I've had the subscription version since it was ChatGPT Plus, and they've rolled out new features. One of the features is in the settings - those three dots - where you can put in some customization. In there I've put in information about all my various courses and my professional background, and it's starting to sound more like me. It's not 100%, but think about it: there are billions of documents in there, and now you've given it some parameters. And people don't realize - beyond the technology itself, it's nothing new.
Jen Lewis 18:45
Yeah.
Kris Hans 19:02
It's just the way that it's doing it. The other analogy I've been using is that it's like taking a photocopy, and then generating another photocopy from that, and another. At a certain point - like taking a picture of a picture of a picture - it's going to get diluted. So it's not true intelligence.
Jen Lewis 19:22
Mm-hmm.
Kris Hans 19:26
It's taking whatever instructions we're putting in there, trying to figure out a pattern, and generating text based off of that. It might sound good.
It might sound really intelligent, but again, that's where that subject matter expertise comes in: you can gauge whether it's actually correct or not. If it isn't, then you can have it edit the output and go from there.
Jen Lewis 19:52
Yeah. And I know earlier we touched on plagiarism and copyright a little bit. The one question I think a lot of the people listening to this podcast are going to want to know is: what are the ethical considerations that educators should be aware of? Do you think there are any issues with it, or is this just another tool, kind of like a Google?
Kris Hans 20:13
Yeah. Well, I mean, there's a plethora of ethical issues. We've talked about the potential loss of jobs. I don't see it like the previous industrial revolution, where we introduced machinery.
Jen Lewis 20:30
Mm-hmm.
Kris Hans 20:30
I don't know what new jobs are going to get created out of this. I see a lot of companies - there have been huge tech companies that have laid off people because of this. So there's certainly that issue. Then there are the large language models themselves - some people say they've been developed by slave labor: people in Kenya or other places getting paid $2.00 an hour to sift through the information going into the large language model for bias and so on. And if you think about it - people don't realize - OK, well, now ChatGPT has made their latest model free. Well, now we're getting paid nothing, and we're helping train this large language model. It's the same kind of MO that tech companies have had in the past, where we help develop their search or their social media, and they're targeting us - we are the actual product. I think also - it was interesting - there's one of the most prolific people in higher education; I don't know how he does it, he's a machine: Ethan Mollick. He's at the Wharton School of Business in the US, and every day he's posting stuff.
Jen Lewis 21:54
Mm-hmm.
Kris Hans 21:55
He teaches entrepreneurship, and he's probably one of the biggest advocates for it. He's made it mandatory - if you take his course, you have to use AI. I was thinking of doing that. But when I went about this last year and experimented in the spring and summer, I talked to a whole bunch of people - some people thought I was crazy. I talked to former students, I talked to colleagues of mine, and they raised their concerns. One of those concerns was, "OK, well, if you make it mandatory, are people going to be OK with that? Do you need to make it mandatory?" That came from some of my colleagues and students - former students, people who are in the workplace - and they brought it to my attention. I'm like, hey, you know, maybe it's a choice for them. I think they should still explore it, and we have certain exercises that are low stakes - just participation-type exercises - so they can get acquainted with the technology. But yeah, there are certainly implications there. Another thing: in the business communication course that I teach, I have the students give peer feedback. And I told the students not to provide feedback by taking another classmate's submission and throwing it into an AI writer, because that classmate didn't give you permission to do that. So for that critique, you have to use your own brain.
Jen Lewis 23:29
Mm-hmm.
Kris Hans 23:33
You can generate your actual submission using AI - following my AI policy to do it - but when you're doing that critique, you should do it on your own, using your own brain, and not take somebody else's writing and submit it. And this is where, if you look at it, even some of the biggest companies have banned the technology altogether, and a lot of it comes down to ethical and legal considerations. Last year, you might recall, Samsung employees took a bunch of their secret sauce - proprietary information - and threw it into ChatGPT. So now, conceivably, OpenAI - and, because of Microsoft's half interest in the company, Microsoft too - has access to all of their upcoming technology. I mean, those employees probably should have gotten fired, and should have been smarter about that.
Jen Lewis 24:22
Mm-hmm.
Kris Hans 24:28
But now, Apple is not allowing their employees to use it. There's a whole bunch of companies that have just banned it altogether. I don't know if that's going to stop anybody, because what's going to happen is people are going to use their personal devices - and especially with these multimodal models, you can take pictures, upload them, and it'll pull the text from them. So there's a whole bunch of issues there. And beyond that, there are certain people who have taken the lazy way out this past year. There have been people in the legal profession, for instance, who took a bunch of their client information, threw it into something like ChatGPT, had it generate their briefs, and filed them with the court - and then, when they went in front of a judge, they found out it had fabricated case law. So now those people are probably going to get disbarred, and beyond that, probably sued for malpractice. So there are definitely issues that people should consider. This is new technology, and I tell people you should always be careful what you put into the system, because who knows what the long-term implications are. We've already seen it with OpenAI's Mac version - you can actually have a desktop client - where there were security issues: everything you were putting into it was vulnerable and could be accessed. They patched that, but are you going to trust this technology? I don't know. Again, that's where you -
Jen Lewis 26:07
Mm-hmm.
Kris Hans 26:08
You've got to think twice about what you're putting into this system.
Jen Lewis 26:12
Well, I think that's a good point, because - and this is more of a personal question, just me wondering rather than asking on behalf of professors -
is there a difference between dumping information into a free ChatGPT account versus a paid one? Is there any protection behind the paywall?
Kris Hans 26:30
I don't know if I - does anybody ever read those terms and conditions? I mean, yeah.
Jen Lewis 26:35
I was kind of hoping you did, so I didn't have to.
Kris Hans 26:37
Well, no, I mean, nobody reads that stuff. There have been studies on that. It would literally take you years to read through all those terms and conditions, and they put all sorts of stuff in there. So what I would suggest - technically speaking, whenever you're entering any information, even if you delete something, you are helping them train on the data, right? Even if they say they're going to delete everything after 30 days, they're using it for God knows what. I wouldn't trust it. And you've seen this with numerous companies. Probably the biggest perpetrator in the tech sector is Facebook; I would say Google is probably second. Their approach is basically ask for forgiveness instead of permission, and whenever they run into any kind of issue, they get fined, they pay the fine, and the market rewards them - because if they actually had to go through the proper process, they wouldn't be able to generate the billions of dollars that they are. So yeah, I would be very cautious. In fact, what I would even suggest to any of you: create a burner email, burner accounts. If anything ever goes awry, just delete the account altogether, because I don't know if I would trust any company out there.
Jen Lewis 28:06
So, anyone listening: don't trust a paywall?
Kris Hans 28:09
I mean, I don't trust anything, and I would be very careful what you enter into these systems. But think about it, Jen - I've brought this up any time I've done talks on this. Especially in the office environment, who likes taking meeting minutes? Nobody does. And you've probably been in these meetings where you have a meeting just for the sake of meeting, because it's scheduled, and then nobody remembers the action items. But now AI - this is a great application. You can have everything dictated, and it can generate the minutes, distribute the minutes to everybody, create an agenda for the next meeting, and put in place any kind of action items. I mean, how much time does that save you?
Jen Lewis 28:59
Ohhh my gosh, so much. That's actually really funny, because a couple of podcast episodes ago, I had on a colleague of mine who was talking about being on campus - why we do campus visits as a publisher - and he was talking about one of his first experiences on campus: he needed a raincoat and didn't have one. And so one of the action items the AI generated for me was to buy them raincoats for when they go visit college campuses. I thought that was so funny. But anyway - so we've talked about how to use it in the classroom, and a little bit about students. I guess, because you've immersed yourself in this more so than other people, what advice would you give to professors who are hesitant, or maybe skeptical, about using AI in their classroom?
Kris Hans 29:48
Well, this is where again I'd lean on Ethan Mollick. He says that for any tool, you need to spend at least 10 hours with it. I've come across so many people who haven't even used the technology - they haven't even tried - so I would encourage everybody to at least try it out and give it a chance, you know, a 10-hour chance, and see what you can do, what it comes up with. Again, AI is not a one-size-fits-all type of solution. It can do some things really well, others not. It might suit some purposes, audiences, and contexts, but not others. So you've got to be selective and smart when you're using AI, and this is what I'm instilling in my students as well. You need to think about whether AI is the right tool for the job and use your own judgment and common sense on whether to use it or not. But like it or not, the students are using it. I remember when it first came up in my class - this business communication course we have is going to be 11 sections this upcoming year; it's a mandatory course for four of our degrees - and some students asked me, "Well, why do I even need to learn how to write, when the AI can write for me?" I'm like, OK, let me show you. So I walked them through some of our exercises, and it generated some pretty brutal results. We could massage it a bit and make it better, but I showed them that if you were going to take the lazy approach, you'd probably only get something like a satisfactory grade. And again, the AI will probably get better. Also keep in mind - it's kind of shocking to think about it this way - but this is probably the worst that the AI will ever be.
Jen Lewis 31:49
Mm-hmm.
Kris Hans 31:49
At least this generative AI should get progressively better. So again, I would encourage educators, professors, to try it out. See where it makes sense and where it could actually help. You're going to have to think about your pedagogical approach to the course and how it might benefit the students - and maybe even think about how, even if it isn't something you're going to use in the classroom, it might help you execute. People don't realize there's so much work that goes into teaching, from the preparation to lecturing, grading, and so on. Now, even creating rubrics, creating questions - it can exponentially help you develop some of that material. Maybe it'll be good, maybe it'll be bad, but certainly it's better than starting from zero.
Jen Lewis 32:48
Right. Yeah. And I always say to my team members: just adopt the mindset of thinking of AI as kind of like your own little virtual assistant. You ask it questions; it's not like you have to use it by dumping content into it, where it could be a whoops later on down the road.
Kris Hans 32:59
Yeah.
Jen Lewis 33:07
Just use it as a virtual assistant, and see what happens. But I think a burning question on everyone's mind is, and this is totally your opinion: do you think educators will be taken over by AI someday? Do you see, in our lifetime, an AI university? I know it's a silly question, but in your professional opinion, do you see that happening?
Kris Hans 33:30
Yeah, I don't think so. Just think about it from a sociological perspective. I mean, technically speaking, you don't even need to go to university. Some of the most successful people do not have degrees: Zuckerberg dropped out of Harvard, you have Bill Gates as one of the richest people, and so on and so forth. But let's face it, you're not Zuckerberg or Bill Gates, and your parents want you to have a good chance in your career. And so whether people like it or not... I mean, I was even having this conversation with my wife. I don't know how much our kids are actually going to learn, but are we going to shortchange them on that opportunity and that experience of going to university? No. I mean, we're going to try our best for them.
Jen Lewis 34:19
Mm-hmm.
Kris Hans 34:21
And so with that, I think certainly this technology will maybe make it a little bit easier or more productive, where we can actually go and do more. But look at, and this is where Ethan Mollick and entrepreneurship come in. I mean, I've taught entrepreneurship, so this is a really good application. Before, what would happen is you would come up with a business idea in the course. Most likely it would be a crappy idea; the students would work on it over the semester and do a crappy presentation at the end. Maybe it's an OK presentation, but how much time are they actually spending on this? Now, with Ethan Mollick, when he makes it mandatory, it's like: OK, you're coming up with an idea, and not only do they come up with an idea, they come up with a brand for it. They come up with website copy, maybe images for that website; if it's a physical product, maybe they actually have renderings of it, and so on and so forth. So by the end of that course, you could have something that's fully fleshed out that you could in fact launch. So if you look at it that way, what we should be doing is demanding more from our students for those end deliverables. Where it would have just been a crappy idea with a crappy presentation, now it could be really polished, and who knows what they might even be able to create. So again, I think, at least for the next 5 to 10 years, I don't think much is going to change other than the way we go and integrate and use it in the classroom environment. So again, that's something that everybody, for every course, is going to have to think about: what the best way to utilize the technology would be.
Jen Lewis 36:15
Yeah, I think earlier you mentioned that it's not a one-size-fits-all approach. So I'd specifically like to know your opinion on this: do you think AI is better suited for certain disciplines in higher education, or is it similar across the board?
Kris Hans 36:33
Well, you know, it's hard. I can't say definitively for every discipline, but I can tell you in business, I think it's really well suited, especially business communication.
Jen Lewis 36:42
Mm-hmm.
Kris Hans 36:44
But I look at entrepreneurship, marketing... I mean, there are so many disciplines you could use it in. Even HR: you could go and put together job descriptions, interview questions.
Jen Lewis 36:54
Oh yeah.
Kris Hans 36:56
I mean, assessing issues that could happen in the workplace. So again, for productivity and workplace business kinds of applications, I think it's really well suited. But, you know, I remember the first time I did a talk on this. It was last year at the Teaching and Learning Conference we had at Mount Royal, about my use of AI in the classroom, and there was somebody from the sciences, I don't even know who this person was. And the pushback was, "The students aren't going to learn anything. No, we can't do this." And I don't know. I mean, in science, maybe again it's your mindset. Now you could use it, as you're saying, like an assistant, or maybe a tutor. Maybe there's some concept you didn't get a full grasp of in the classroom environment that you can get a better idea of.
Jen Lewis 37:45
Mm-hmm.
Kris Hans 37:52
"Hey, I need to go and figure out how these neutrons and protons work," or what have you. What would be a good application of that? And so again, it's just your perception, how you use that technology. So I don't know if there's any kind of discipline that you wouldn't be able to use it in. It's just a matter of the educator's process. Math, though…
Jen Lewis 38:13
Mm-hmm.
Kris Hans 38:13
It does have issues, but it'll likely get better. I mean, you've probably seen those memes where you have to apply BEDMAS, right?
Jen Lewis 38:23
Mm-hmm.
Kris Hans 38:23
And so you have to do the brackets and all that kind of stuff, and if you put it in there, it'll answer it one way. Then if you tell it, "Oh, well, you've got to do the multiplication before this," and blah blah blah, it'll be like, "Yeah, you're right," and it'll change it. And then you go back and say, "No, actually, you were right the first time," and it'll just flip it back. So it doesn't know 100%, and this is where I would not trust anything the AI generates. I mean, it's not a fact checker. It can make up stuff on the fly; it mixes up dates, numbers, all sorts of things. As I've mentioned, even on the legal side of things, it just makes up a whole bunch of fake case law, right? And so this is where that subject matter expertise comes in. And this is again coming back to your point about educators and professors: if somebody has been teaching for X number of years and is a professional in that space, they would be able to gauge it. That's why I say, I don't code much these days, but I've talked to programmers. Yeah, sometimes it generates garbage, but there are certain applications. Say you had a bug, because literally, if you miss one character, it can throw off and break your code, and it can pick up a lot of those issues.
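For readers curious about the order-of-operations memes Kris is referencing, here is a quick illustration (not from the episode) of what strict precedence rules actually give for the viral expression, written out in Python:

```python
# The viral expression "8 ÷ 2(2 + 2)", written with an explicit
# multiplication sign. Python applies standard BEDMAS precedence:
# brackets first, then division and multiplication left to right.
result = 8 / 2 * (2 + 2)
print(result)  # 8/2 = 4, then 4 * 4 = 16.0
```

The ambiguity that fuels the memes (is the answer 1 or 16?) disappears once the expression is written unambiguously, which is exactly the kind of check a chatbot can be talked out of.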
Jen Lewis 39:40
Mm-hmm.
Kris Hans 39:48
So again, it's your way of approaching the technology and how you might be able to find it useful. Imagine trying to find that one character, let's say a semicolon or something, that was breaking the code and that you had to go and search for. How much time does that take? I mean, you've seen just recently how much chaos happened because of that CrowdStrike Windows incident, where they had to go and reboot everything. And this is where, again, I think you still need the human element. It's not like everything's going to be automated yet.
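As an aside for readers, here is a hypothetical sketch (not from the episode) of the kind of one-character bug Kris describes, the sort of thing an AI assistant is genuinely good at spotting:

```python
# A one-character bug: "=+" instead of "+=".
def total_buggy(xs):
    s = 0
    for x in xs:
        s =+ x          # parses as "s = (+x)": s is overwritten each pass
    return s

def total_fixed(xs):
    s = 0
    for x in xs:
        s += x          # intended: accumulate into s
    return s

print(total_buggy([1, 2, 3]))  # 3  (only the last element survives)
print(total_fixed([1, 2, 3]))  # 6
```

The buggy version runs without any error message, which is why a single mistyped character can cost hours of searching.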
Jen Lewis 40:26
Yeah. And I think sometimes the unknown is why people make those fearful blanket statements like, "Well, they're not going to learn," or "Our jobs are going to be replaced," because people just aren't educated enough in the AI realm, which you are, right? So speaking to that, are there any emerging trends in AI in the next 5 to 10 years, maybe that you've seen or that you can speculate about, that we can look forward to?
Kris Hans 40:51
Well, we can already see these large language models. I mean, it's remarkable what's happened in the last two years, but what's also coming out now is small language models, and this is what Apple is working on right now. It'll be interesting, because what they're promising addresses some of those ethical considerations I mentioned, where if you're concerned about privacy, you're putting all your information into this thing. Again, I would probably not put anything sensitive or confidential into any of these devices or programs. It goes beyond that once it's in the cloud; you've probably seen people's pictures being leaked out, or what have you. So you should just be careful putting anything digital on the Internet, because it could exist forever. But these localized models, these small language models, this is where I think it could get really interesting. And I mentioned earlier the processing power of these devices. I mean, think about your smartphone. I have the latest iPhone, and it's probably, I don't know, maybe 20 times more powerful than computers of the past, and it's in your pocket.
Jen Lewis 42:14
Mm-Hmm.
Kris Hans 42:16
So now imagine, the way they've conceived of this partnership between OpenAI and Apple Intelligence, it's probably going to be able to pick up on your writing style and certain things that you feed into it, and you don't have to worry about it because it's on your device. Again, I don't know; I'm not that technical. They say there are ways and measures they've put in place. But even look at Microsoft. They've invested $11 billion into OpenAI, and at this point they're also investing in small language models, because these large language models can certainly do a lot of things, but maybe there are more specific types of functions or tools that we want to develop. And then you don't need the billions of documents; you just need X number of documents for that particular application. And this is where, going back to what we talked about earlier, there are simple things. I look at emails, and that's part of what I'm teaching in the business communication course. I've been averaging a couple hundred emails a day for the last few years; during the pandemic, it was even more. Not everything requires a response, but I certainly do have to respond to a lot, and that would be a really good application. And now we're seeing that predictive-text ability from text messages showing up in the Microsoft or Google office applications, right?
Jen Lewis 43:46
Hm-Hmm.
Kris Hans 43:47
And now it's being applied to the masses, so we're going to see more of that. But again, you've got to check whether whatever it generates makes sense, because there's a lot of stuff it just puts together that may not be very clear, specific, or relevant to what you're trying to communicate.
Jen Lewis 44:08
Well, that human piece is never going to go away, right? I don't think we're going to be living, in our lifetime, in a world where robots are telling us what to do. At least, I hope not.
Kris Hans 44:17
Yeah. Well, I mean, we're supposed to be having flying cars. I don't see them yet either, so…
Jen Lewis 44:23
That's funny. So one of the reasons I brought you on today was because a couple months ago, I heard that you were writing a book with us that involved AI. So I wanted to make sure that I gave you a little bit of space to talk about what your book is about and maybe why you're writing it, and maybe when we can look forward to seeing that come out.
Kris Hans 44:44
Yeah. So the book I'm writing is titled Business Communication Decoded. I originally was supposed to write it during the pandemic, but I got super busy.
Jen Lewis 44:50
Mm-hmm.
Kris Hans 44:56
Unlike many of my colleagues, I actually had online teaching experience; I had taken courses on both learning and teaching online in order to teach online in the first place.
Jen Lewis 45:06
Mm-hmm.
Kris Hans 45:06
So I did that back in 2013. During the pandemic, courses just got thrown at me because they knew I had this ability to execute. So the book kind of got shelved, and I think maybe that was a good thing. I was super busy, and at that point the technology wasn't there. Then I re-engaged with Kendall Hunt, and we talked about my use of AI in the classroom. And with business communications, I mean, any textbook, as soon as you publish it, it becomes outdated. So my approach is that I'm actually not trying to make it a traditional textbook. What I'm hoping is that it'll almost be for the masses, where somebody could pick it up if they needed some help with, you know, preparing a bad-news message or some sort of negative situation where you have to draft a message. I'm going to provide you some of the models you can use. For instance, you might have heard of the bad news sandwich, or I call it the bad news burger, where you bury the bad news in the middle. There are certain models like this that I'll put together.
Jen Lewis 46:19
Mm-Hmm.
Kris Hans 46:23
But then, beyond that, I'm also providing tips on how you could use an AI writer to generate that output. So people don't have to just come up with stuff on the fly.
Jen Lewis 46:33
Yeah.
Kris Hans 46:39
They can go and use it as a resource, so it'll hopefully be full of useful applications of communication for the workplace. I'm covering everything from how you should analyze any type of communication you're dealing with, to audience, design, persuasive messaging, and presentations. So it's going to cover the gamut of everything you would need, and that's what we cover in the business communication course, which is one of the reasons it's a mandatory course. I mean, when I did my undergrad it was mandatory; it's no longer mandatory at some institutions, but at Mount Royal University it is. And one of the reasons we've got four degrees requiring it comes back to that human element you mentioned. You know, we have a degree in data science, there's business, computer information systems, and one of the reasons they wanted business communications as a mandatory course is that these technical disciplines are going to have to communicate with people in the office environment. To be effective in that environment, you need to get your message across so it's understood, so people can take away what they need and take action. So again, it's very practical. We do go through the theory, but we try to make it as practical as possible. And it's funny, because quite often the students don't realize the value of the course at the time, but I've seen people after they've gone into industry, and they're like, "Kris, that situation we talked about in class, or that exercise, I actually had to deal with that type of situation." So it's kind of rewarding that way.
Jen Lewis 48:28
Mm-hmm.
Kris Hans 48:30
And so again, that's what I'm working on. It should be released in this upcoming year. Things are changing so quickly, but I'm writing it in a way that even if this AI writing technology changes, hopefully people will at least be able to get value out of it. I don't know how this technology is going to change, but I've even come up with some models of how you could approach developing the prompt.
Jen Lewis 49:12
That's amazing. I'm sure there'll be a lot of great takeaways from your book, and I'm so glad I could facilitate this conversation with you today. I know I learned a lot, and I think anyone listening to this probably feels a little more comfortable and at ease with AI in the classroom, or maybe just in their day-to-day. So I can't thank you enough for your time today, and I genuinely appreciate you sharing all of this information with us.
Kris Hans 49:35
No, it's my pleasure.
Jen Lewis 49:37
Thank you so much.