AI and Education
The Dilemma of Learning in the Age of Large Language Models
"Don't be too proud of this technological terror you've constructed. The ability to destroy a planet is insignificant next to the power of the Force."
--Darth Vader, Star Wars: Episode IV — A New Hope
AI offers incredibly powerful tools for finding, analyzing, and recombining knowledge. It's also slated to replace many of today's jobs. Naturally, many feel it's essential we use AI in schools — they want our students prepared for this brave new world of advanced technology, a world where technology can rapidly expand an idea into a completed masterpiece. Many students and faculty use AI to enhance their productivity by seeking out new ideas and checking their work. But others — like myself — feel that AI creates tremendous problems because it allows individuals to bypass many of the traditional measures of learning, such as doing deep research or composing ideas.
Students are caught in the middle of this debate — and without guidance, many are using AI in ways that defeat the entire purpose of education. For me, much of learning requires struggle — you have to read difficult texts, grapple with confusing ideas, and eventually come up with ways to express your own thoughts and opinions for others. I feel that AI streamlines the process to a dangerous degree. This streamlining isn't necessarily bad in every situation — of course it makes sense to have an AI double-check your email to a professor to make sure you caught all the spelling and grammar mistakes. But asking the AI to write the email for you? What happens when you meet with your professor face-to-face, and you don't understand everything in the email that came from your own account? Now imagine carrying that to a research paper. Sure, Grammarly can adjust your writing to sound "professional," but it can't change the way you speak during a job interview. And ChatGPT can provide all kinds of ideas to start an outline, but why can't you generate ideas on your own? Is it because you need to read more about your topic? That's called research — that's the whole point of writing a paper. And sure, ChatGPT might generate better ideas on some topics, but who cares? As a professor, I don't grade based on what ChatGPT can or cannot do — I only care about what you as a student can do. And it frustrates me that I sometimes cannot tell how much writing comes from my students and how much comes from a disembodied machine in the cloud.
Before we can understand the challenges that AI poses to education, we need to define the concept of education. What does it mean to be "educated"? Why does it matter? Beyond that lovely embossed certificate they hand you at graduation, what does education actually represent?
Unfortunately, we have to look past some buzzwords and jargon. Many describe education as developing "critical thinking" or "critical inquiry." Others see education as a means to learn important "job skills" for the future. Still others believe education should provide a more "global perspective" of the world. And then there are some who see education as a waste of time, as "book learning" instead of "practical" knowledge.
So first, let's understand that education takes many forms. This website primarily focuses on education in college, but there are some key aspects that also apply to K-12, the career world, and the vast ocean of life. For me, education is about developing your personal ability to deal with the world, both intellectually and emotionally. Education isn't just about information — it's about the process of finding and using information as you navigate the world. It's about learning to think for yourself.
In my eyes, education should help you with four key skills:
Identify Problems: when you walk into a situation, you should be able to recognize when something is wrong. You might not have a solution, but you should be aware that something needs to change. And this applies not only to concrete tasks like building cars or making a new business profitable, but to social issues such as ensuring fairness for all individuals. For example, why are so many American veterans homeless? Not everyone sees problems like this — and a problem cannot be solved unless it's first recognized.
Find Relevant and Accurate Information: When you face a problem, you don't necessarily understand it. You should be able to research the issue, find sources describing the problem — and then you need to choose which information is accurate, and which information is misleading. Some people are experts, some people pretend to know — and others will outright lie to you. An education should help you learn to tell the difference. Would you listen to a homeless veteran? Would you know how to find one to talk to? Can you identify those organizations that are trying to end homelessness?
Consider Your Own Ignorance: No one knows everything. No one. An education should help you see that the world is complex, and that the "simple" answers are generally missing something. An education cannot give you all the answers — nothing can give you all the answers. But an education should help you see the shortcomings in your experience. It should help you realize your blind spots — and that, in turn, will motivate you to find more information rather than jump into a bad decision. Yes, I feel we should provide homes for everyone who's homeless — but I have no idea how to get that kind of funding. As a veteran, I don't even know how to connect homeless veterans to resources through the Department of Veterans Affairs — and I have no clue what resources might be available for those who don't qualify for VA support.
Make Good Decisions: A good decision isn't always the "right" decision — in some cases, there is no "right" choice among all the bad options. But with an education, you should be able to weigh your options and then use the collected information to make good choices. Personally, I have to live with the fact that I don't have the personal resources to somehow "fix" homelessness in America — I don't have any good choices to actually address this issue. But I can decide whether or not to ignore these issues. I can inform others about what I do know. And as a teacher, I can do one tiny thing: tell my students to pay attention. And hope that they go forth with better options in the future than I have now.
AI can't do your thinking for you — but I've had many students turn in AI papers as if they'd done the work themselves. Why do they do this? Because for these students, education isn't about training your brain — they see education as simply a mile marker on the road of life. For some students, it isn't about how you did the paper — it's simply about the grade it got, and whether that grade gets them one step closer to a diploma.
That kind of thinking defeats the point of education. If you earn an A in a class without actually reading the material or writing any of your own words, then what was the point of taking the class? It certainly won't help you in the real world. In the land of jobs and family and paying the rent, you don't have a teacher telling you which prompts to enter into the AI.
Setting aside the issue of how students may use AI, the technology poses a unique dilemma for teaching: it has the potential to augment teaching to such a degree that it could replace teachers, at least on paper. As Beth McMurtrie describes in her article "AI Has Joined the Faculty," some faculty feel that AI can save time through content generation or as a "thought partner" in developing courses, while others fear it will be used to cut costs by reducing the number of faculty on the payroll — and yet AI's content errors undermine many faculty efforts to use it as a course tool.
For me, this creates both pedagogical and ethical issues. If AI still creates errors that require correction by human instructors, then it clearly can't teach on its own. But the technology is clearly effective enough to aid teaching — and those who are unfamiliar with teaching may mistakenly believe it can replace human mentorship. Teachers at all levels of American education already face punishing grading loads, large class sizes, and low pay, yet institutions continue seeking new ways to cut instructional costs while maintaining the impression of quality. This creates external pressure to use AI for teaching regardless of whether it actually helps students learn.
Clearly, we need actual research to determine how or if AI should be used in the classroom, and I strongly believe that every course will have differing answers to these questions. AI is obviously changing our society, and it will become a fixture of the American future — it's simply too useful to be abandoned. But as we face this new technological landscape, students and educators need to decide how best to address the technology in order to maximize learning.
AI can offer helpful information for navigating the world, but it can also hallucinate "facts" that don't exist.
AI and Identifying Problems
One of the strengths of AI is its access to massive amounts of data — AI possesses broad information across a vast array of topics, so it can often describe situations and issues you've never heard of. If you tried starting a business, for example, AI could tell you about challenges that simply weren't on your radar: payroll, small business loans, labor laws, trademark protections, IRS filings . . . and so on. You might not realize that the perfect name for your future company is already taken — and an AI might tell you about the need to check.
But the downside? AI is limited to information that's available in its database. Is this particular AI giving you information from Wikipedia? From a business journal? Or a bunch of blog articles written by wannabe CEOs? AI also doesn't experience businesses the way people do. If you ask AI the wrong questions, it will give you unhelpful responses. A person who's started a business, on the other hand, might know which questions to ask you about your business idea.
AI and Finding Useful Information
Never underestimate the vast LLM databases — AI platforms can access mountains of information far faster than you could ever hope to read through everything. Because of this processing speed, AI can offer a lot of information in response to a simple prompt. And then it can refine that information based on your follow-up questions, which allows you to get very specific and detailed information that directly applies to your situation.
Unfortunately, AI can make major factual mistakes. When an AI sifts through those databases, it can line up the wrong pieces of information, and then hallucinate something brand new that doesn't actually exist. And a lot depends on how that database was created. If an AI is given poor sources of information, then the AI's answers will give you the same misinformation — and you won't know this unless the AI tells you which sources it used. This is one advantage that human-written sources have — you can evaluate the qualifications of the author, and then decide if that person is someone you can trust.
Just as AI affects what information you see, it may affect your decision-making. Don't let technology drag you into the trap of overconfidence.
AI and Addressing Your Own Ignorance
If you don't understand an issue, AI can quickly provide a complete and detailed overview about that topic. It can easily "fill in the gaps" of your personal knowledge. But in filling those gaps, AI can also provide a false sense of confidence. It can give inaccurate information with a sense of complete confidence, as if the hallucinated facts are absolute and unchanging.
That said, this isn't simply an AI issue — many people will also make sweeping declarations without any research. They might be misinformed, overconfident, or outright lying. But unlike AI, individual people can be evaluated for accuracy. You can check their qualifications and their admissions of ignorance. Does this person have experience running a business? Was this person a successful business owner? And is this person acting as if they have all the answers, or are they able to accept the uncertainties that come with any new enterprise?
AI and Making Decisions
AI has access to information, but it doesn't have to live with the consequences of a bad decision. Sure, AI models are continuously tweaked to be more and more useful, but they aren't "punished" for being wrong. If an AI tells you to spend your life savings on a doomed business idea, it won't be the AI who's filing for bankruptcy.
Given AI's access to data and the confidence it displays in sharing information, it's tempting to believe that AI will make the best possible decisions. But making decisions is a process, and we can only learn that process through experience — if you regularly allow AI to make decisions for you, then you'll eventually lose the ability to make the right call. You have to accept that it's okay to make bad decisions sometimes — that's a part of life, and it's how we learn. Overreliance on AI can rob you of the ability to make your own choices.
Ultimately, you are responsible for the decisions you make — and if you use AI, then you need the judgment to decide if AI has offered a good suggestion.
Refusing GenAI in Writing Studies by Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes
"Grok May Be an Early Sign of AI's Ability to Mislead" by @reviewraccoon in Medium
"These Women Tried to Warn Us About AI" by Lorena O'Neil in Rolling Stone
"AI Has Joined the Faculty" by Beth McMurtrie in The Chronicle of Higher Education