Appendix B
Episode 013 - AI and the Erosion of Trust in Higher Ed


That AI. So hot right now.
In a world where Higher Ed MarComm professionals are always seeking out more efficient workflows, large language models like ChatGPT have been touted as the modern marketer’s “ultimate weapon.”
However, Higher Ed has a trust problem, and because of that, we need to be transparent with our audiences about when and where we’re using AI, especially in situations where they expect to interact with actual humans.

Transcript

Carl Gratiot:
From Bravery Media, this is Appendix B. On this week’s episode, AI and the Erosion of Trust in Higher Ed. Here’s Kristin Van Dorn and Joel Goodman.

Joel Goodman:
So AI has been everywhere, oh my Lord. Like, surfing around LinkedIn, all I see from everyone is, yay, we got a new AI thing that’s writing your content for you, or is replacing that chatbot that, remember, we told you it was an AI chatbot, but it wasn’t a large language model AI chatbot?

And, you know, thousands and thousands and thousands of blog posts and newsletters about how amazing AI is. And I’m not skeptical about it. There are some really cool things you can do with AI. We’ve used these large language models to do some fun things, even at Bravery, like helping us out with images for our website when we just weren’t gonna try to learn how to do 3D rendering in Blender and create our own. And, you know, we’re talking through a lot of interesting ideas internally, but I’m not convinced that the best application of these large language models is just tossing a chatbot on your website and having it regurgitate that. And I think you agree, Kristin, but I don’t think that AI is here to take all of our jobs, at least not anytime soon.

Like we got a little bit of time.

Kristin Van Dorn:
No, I don’t think so at all.

Joel Goodman:
So where do you stand on, I guess, like, I don’t know, what are your initial impressions of all the large language model AI? And maybe we should separate the two things, right? So artificial intelligence, what we’re seeing today with the ChatGPTs and stuff, is large language models.

It’s a database of a mind-boggling amount of data points around how people write and the order of sentences and things like that. And then what you get back, content-wise, is basically a prediction that the algorithm is putting together. It’s predicting what the next word in the sentence is supposed to be, or maybe what the next sentence is going to be after the one it just wrote.

It’s not actually thinking, even though it can kind of feel like that sometimes when you’re having those conversations. This is not the artificial intelligence I think maybe we were promised in sci-fi movies, but it is incredibly impressive. So we’ll just separate that. I think most people think AI is just this cool, big scientific leap forward, and it is, kind of, but it’s a very specific type of predictive text generation at this point, or image generation in the case of Midjourney and DALL-E.

Kristin Van Dorn:
Yeah, so I think that there are a couple of misconceptions floating around. One of which is that AI is an average of what’s out there in content, and I think an average is a very crude way of understanding it, because if you are putting in prompts in an interesting way or cultivating prompts that make sense to the text predictor, you’re going to get better or different-than-average results. It’s more about how you’re cultivating those prompts, right?

Joel Goodman:
And, I love this term, designer Dan Kaar coined it, I think, but you gotta become a Prompt Daddy to really make the AI do what you want it to do, right?

It’s a skill in and of itself, which is why we see so many jobs popping up for it. But what that means is that for most people just starting out with, you know, an AI, like maybe it was built into a CMS you use or built into a CRM you use or some other product.

Or maybe you’re just trying out a ChatGPT account to see what it does. You have to get really, really good at asking questions or giving instructions. It’s like natural language programming, really, is what it is. You’re still a programmer, you’re just not writing the code directly.

There’s that abstraction layer in between, but you have to get really good at being super specific but not too specific, and a little conceptual but not too conceptual, in order to get the AIs to respond with something that’s useful. So in your mind, Kristin, how do you think that liability it has of not being an average affects the ways we’re hearing everyone should use AI, or everyone’s starting to use AI, in the Higher Ed scene at least?

Kristin Van Dorn:
Yeah, so I think there’s a big discussion on what these models are good for and what they’re not good at yet, and people are trying to determine how they’re going to augment their workflows with AI, with ChatGPT, in a way that makes sense, right? And I’m not sure we’re asking the right questions yet, where it’s less about what this product can do for us and more about how this is going to fit into the environment and into the minds of our students and our audiences. So at what point can our audiences detect when we’re using a chat generator versus a real person, and how transparent do we wanna be about when this content has been generated by a large language model versus something else?

I think the important thing is to understand when our audiences expect to be talking to or conversing with or learning from a real human that has tangible experience, and when they actually prefer, like, the technical writing of a text generator that is using predictive software to kind of anticipate their needs.

I think those are different things and we have to get better at having the discussion of what do people want versus where can this fit into my workflow?

Joel Goodman:
And the workflow thing is super interesting. I think there’s a tendency to want to think that it’s the answer to all of our problems, and that if we just apply AI to whatever our workflow issue is, it’s gonna give us more time back, whatever that means. And so we’ll use, like, a ChatGPT to write our content for us, you know? And leave it at that, or maybe give it a light edit. But a lot of times, sure, it might be a decent first draft if you’re good at prompting.

And good at prompt writing. But in most cases it’s not gonna be great, and you still gotta work at it. You’re still gonna have to spend like an hour saying, well, modify this sentence, or change this paragraph, or let’s reorder it. Or like, eh, I don’t know, this doesn’t seem quite right to me.

You know? And then even checking and verifying the data that is in there. And I think part of the danger there, besides just, you know... I think the first inclination for skeptical marketers is that, well, all the content’s gonna start sounding the same. We’re just gonna have chatbots writing the same thing.

And then what happens when chatbots are learning from chatbots, and it all just gets, you know? Sure, I think there’s actually probably a good bit of danger there in terms of the content that comes out being very generic. But I think one of the bigger issues that comes up, and we talked a little bit about this before we did this recording, is the trust factor, right?

So, we know from what’s happening with AI right now that it often lies. You have to be really good at fact-checking sources or fact-checking the things that it’s writing. We’ve seen, you know, even in Higher Ed, there have been some mini scandals about AI-generated content over the last several months.

How do you think the trust factor plays into what we do in Higher Education, from a marketing standpoint, but also from an academic standpoint?

Kristin Van Dorn:
Well, so I think of the trust factor as cutting two ways. One is that people don’t trust AI models completely yet, which they shouldn’t.

Like, it’s healthy to have some skepticism, but also people don’t trust big institutions right now either, including Higher Ed. So I think trust, and generating trust with our audiences, has to be our north star. And that comes into play when you’re using a product like ChatGPT, to make sure that it’s reading like something that our audiences expect from us and that they want from us.

It’s not about getting the facts right, although, of course, it has to be accurate. We don’t wanna mislead anyone, but it’s about engendering a feeling like, okay, I believe this institution. And if your model is producing content in such a way that it sounds overly marketed, or it’s missing the things that make it sound human and approachable, like metaphors and similes, I think you’re getting to a point where that trust is gonna dissolve.

However, if the AI model is giving you content that is getting people to where they need to be faster on your website, if it’s working for them and producing better UX results, then I think people are gonna celebrate the changes that you’ve made and it’s gonna have a really positive impact.

I think the tricky part is going to be discerning how this is affecting your audiences’ trust, and that’s the thing that you wanna focus on.

Carl Gratiot:
Thank you so much for listening to Appendix B. We really have fun with these chats each week, and we hope you enjoy listening to them. And if you do, please consider leaving a review on Apple Podcasts. You can be as honest as you want. We hope it’s good, but if not, well, at least you’re being honest. Right?

If you wanna subscribe to our newsletter and get some Higher Ed Hot Takes, please go to Bravery.fyi. Thank you so much. We’ll see you next week. Bye-bye.