Understanding AI's potential impact on schools and teaching media … – Delaware First Media

As Delaware moves forward in establishing new standards for teaching media literacy in its public schools, educators are starting to grapple with how another technological advance has begun to infiltrate their classrooms: artificial intelligence.
When the state’s lawmakers passed S.B. 195 last year, directing the state Department of Education to develop media literacy standards, artificial intelligence, now almost universally referred to as “AI,” was not drawing much attention in the education arena.
While the State Board of Education is expected to vote at its Sept. 21 meeting on whether to implement those new standards for the 2024-25 school year, the introduction and widespread use of AI tools like ChatGPT in recent months have added another layer to the media literacy conversation.
“This legislation [S.B. 195] is almost archaic compared to what is happening now,” says Jeff Menzer, superintendent of the Colonial School District.
“People are just starting to understand this [digital] space, and now it’s being set on its head,” Menzer says. “It’s definitely unsettling to think about the capabilities of technology, of what students experience, if you don’t understand the technology itself.”
Citing Facebook, Twitter, TikTok and Wikipedia as examples, Harry Brake, library media specialist at Woodbridge High School and outgoing president of the Delaware Association of School Librarians, says it’s not unusual for new technological concepts to initially draw a negative reaction that eventually becomes more positive.
The debate over AI has created a measure of polarization within the education community and between generations while heightening fears in some circles that it can become a powerful tool for spewing misinformation and disinformation.

What is artificial intelligence?

Artificial intelligence, the ability of machines to perform tasks that are typically associated with human intelligence, such as learning and problem-solving, has been around for years. (It became an academic discipline in 1956.) But its use has taken off in the past decade, with the development of tools that have become part of daily life — navigation apps, facial recognition, social media, voice assistants, search engines, smartwatches. Many industries, including health care, transportation, the military, finance and telecommunications, have multiple uses for AI.
But only within the past year has AI become a hot topic in the K-12 education sphere. That’s because of the arrival of ChatGPT, which provided this writer with this simple definition of itself: “ChatGPT is a computer program powered by artificial intelligence that can have text-based conversations with users, providing information, answering questions, and engaging in natural language interactions.”

Why all the fuss?

The release of ChatGPT for public use last November almost instantly stirred up a ruckus. By February, school systems in New York City and Seattle, among others, had banned its use. Critics noted that the information generated by prompts to ChatGPT was not always accurate. Doomsayers predicted that widespread use of the tool could spell the end of high school English classes as they’re now taught.
Meanwhile, Code.org, an education innovation nonprofit, has launched AI 101, a series of online programs to help teachers become familiar with artificial intelligence. Its backers include ETS, the organization behind the SAT and other standardized tests, and the International Society for Technology in Education, one of the resources for Delaware’s digital media standards.
Brake has found other online resources including NowComment.com and YouthVoices.live that can help educators and students smooth their transition into using artificial intelligence tools.
Nevertheless, teachers remain uncertain about AI’s impact. Over the summer, researchers for the Education Week newsletter surveyed more than 1,300 educators nationwide, asking how they thought AI would affect their school or district in the next five years. The results: 49 percent said its impact would be “equally negative and positive”; 28 percent said “mostly negative”; 13 percent said “mostly positive” and 10 percent said “no impact.”
Some educators have worried that students, once they learned the basics of ChatGPT, would use the tool to write their research papers for them, and that teachers might not be able to tell whether the paper was written by the student or a computer program. In a more basic sense, some teachers are worried that reliance on AI will stunt students’ learning.
Others say that AI, if used appropriately, will help students improve their understanding of the subjects they’re being taught.
“This has all come so quickly, this giant panic. We have to talk a lot of people off the ledge,” says Christina Scheffel, an instructional technology specialist in the Indian River School District and a member of the task force that helped develop the media literacy standards. “I don’t know everything about AI. I don’t know the good or the harm, but we do know what’s wrong: using ChatGPT to write a paper.”
But maybe it isn’t entirely wrong, as Scheffel herself admits. There’s a difference, she acknowledges, between using AI tools to generate a list of sources when doing research for a term paper and actually using AI to write the paper.
Fears over AI’s potential to dominate the realm of cyberspace information go beyond whether students can get away with using it to write reports for them. Those fears follow many of the same lines as those expressed over the accuracy of any information found online.
“Technology has evolved faster than humans,” says Yun Fei Lou, a member of the Christina Board of Education. “We have serious difficulty differentiating between what is real and what is potentially dangerous.”
“I’m worried sick about it,” says Jonathan Rauch, author of “The Constitution of Knowledge” and a senior fellow at the Brookings Institution.
“Experts say this is a very powerful tool for targeted disinformation and misinformation,” he said in an interview. “It’s extremely good at customizing to individuals, in voices that are appealing to them, so it’s impossible to know who you are listening to.”
Even if the information derived through AI is accurate, there’s the issue of whether the act of securing the information results in actual learning.
“But the more I watched my children, the more it became clear to me that, while AI can assist in getting information to a learner, it cannot do the thinking for them — it cannot help them truly learn,” Rina Bliss, a sociology professor at Rutgers University, wrote in a Washington Post opinion column in April. “AI doesn’t compel students to think through or retain anything. And simply being fed facts and information is not the same as ‘learning.’”
Others have a more optimistic view. Writing in the Inside Higher Ed newsletter, Marc Watkins, a lecturer at the University of Mississippi, predicted that “students, and likely you, will come to use this technology to augment the writing process, not replace it.”
Also, Watkins contends that “in regions and communities where access to quality education is limited … AI writing assistants have the potential to narrow the gap between under-resourced schools and more affluent ones and could have an immense impact on equity.”
As AI tools develop to better interact with the individuals who are using them, they could well have a positive impact on reducing educational inequities, agrees Ryan Curl, an educational technology specialist in the Woodbridge School District. “And it could help provide students with better choices in life,” he adds.

AI in the lower grades

Even at the elementary level, some students are already familiar with ChatGPT. Bonnie Gaus, librarian at McVey Elementary in the Christina School District, tells of a fifth grader joking with her last year that he would use the tool to write his next paper. Her reply: “We know your writing style. We’ll know it’s not you.”
Students in primary grades are already figuring out how to use AI tools to their advantage. Gaus described another incident, from when students were engaged in remote learning during the COVID-19 pandemic: a teacher was giving a spelling quiz, and a child learning at home turned away from the screen and asked a nearby Amazon Alexa device how to spell the word.
Lou, who has two sons in Christina’s West Park Elementary School, says students, like their parents, “have to use technology without becoming too reliant on it.” In this context, he sees ways that using AI can benefit his children. His 5-year-old, for example, can use an AI tool to write a story and learn about sentence structure by seeing the finished product. “Later on,” he adds, “they’ll be directing AI,” doing things like giving it instructions to enhance their research.

Do Delaware’s standards address AI?

The words “artificial intelligence” do not appear in the legislation that mandated the adoption of digital literacy standards for public schools. Nor is there any direct reference to AI in the draft standards, which are scheduled to be considered for approval on Sept. 21 by the State Board of Education.
The committee that developed the standards did not specifically address artificial intelligence issues because the term was not used in S.B. 195, a state Department of Education spokesperson said this week.
In an interview, state Sen. Sarah McBride, D-Wilmington, the lead sponsor of the legislation, cited artificial intelligence as one of many complicating factors in the ever-changing digital world. “We have to keep up with best practices, be better prepared to navigate this environment.”
The state Department of Education has provided resources to and promoted discussions among key digital learning personnel at districts and charter schools but has not issued any statements specific to the use of AI, the department spokesperson said.
While lacking specific references to AI, the draft standards do, to some extent, anticipate its arrival in the classroom.
For example, one of the standards on accessing, analyzing, evaluating and creating information calls on students to “responsibly repurpose or remix digital resources into new creations.” Another standard refers to “ethically using and reproducing others’ work.”
When some of these phrases were added to the standards, they were meant to relate to students’ excerpting or copying materials found online, said Brake, a member of the committee that drafted the standards. “We weren’t considering AI at that point, but the wording is relevant to AI,” he said.
According to the Department of Education, there are no current plans to broaden the standards to specifically address artificial intelligence, but the General Assembly could pass legislation requesting changes.
Brake says the new standards don’t have to be changed now to incorporate AI-related issues. “We have a framework. It depends on how it’s interpreted, how it’s carried out,” he says.
The arrival of AI, as with some other tech advances, has predictably revealed a generational divide.
Many Woodbridge parents, Brake says, feel there’s too much technology available to be able to use it all safely. “They’re very worried and nervous,” he says.
Students are more likely to accept and adapt to the new tools, Brake says. “They say, ‘That’s the way it is now.’”