During the board meeting on the night of Aug. 19, Jennifer Fry and Aaron Cook of the Technology Department presented an outline for the use of AI within the DCS educational and professional environment. The outline intends to educate district leadership, school leadership and teachers on the possible risks and benefits of using AI within the district’s facilities, as well as how to mitigate any issues that may arise from its increased adoption.
Titled “DCS AI Guidance Toolkit,” the 14-page informational packet’s stated goal is to “realize the potential benefits of AI to improve learning outcomes, support teacher instruction and quality of life and enhance educational quality.”
“By providing guidance we hope that we can improve our learner outcomes, support our teachers’ instruction, prevent data privacy violations and prevent inconsistent disciplinary consequences,” Fry said.
The packet then addresses the two types of AI systems in use: Generative AI, which is artificial intelligence capable of generating text, videos, images and other data, and Predictive AI, which analyzes data to forecast future trends.
Following this is the first of the guide’s five sections: Agency. It states that “any decision-making practices supported by AI must enable human intervention and ultimately rely on human approval processes.” Examples of decision-making situations given in the packet include hiring, academic assessments and interventions, and resource allocation.
It also calls for transparency from AI services about how their tools function and operate, so the district can oversee their usage and outputs. Additionally, it requests that the district be informed of how to override any such system.
The following section, Compliance, aims to educate DCS employees about existing school, state and national policy on AI usage and AI technologies. It also stresses the importance of ensuring that any AI provider follows data protection and privacy regulations, so as to safeguard district assets as well as staff and student data.
The next section of the guide is AI Literacy. It sets forth a basic plan for technology education in grades K-5, adding new content to the district’s existing technology applications courses. This ranges from simply accessing an AI assistant to understanding the limits of a computer’s understanding of humans.
The course of study for grades 6-12 had not been developed when the packet was created, though it is intended to be added in future versions of the outline.
The outline finishes with a risk-benefit assessment and a brief closing note on preserving academic integrity, even with the potential introduction of AI assistants to the curriculum. Both sections emphasize that students must not abuse AI, especially by attempting to “submit AI-generated work as their original work.” The outline also echoes the well-known IBM slide in stating that all management decisions must ultimately be finalized with human approval.
How this will be implemented remains to be seen, and some students remain skeptical of any increased presence of AI in the classroom.
“This is comparable to teaching how to rob a bank in a police academy,” senior Hudson Williams said.
Other students, though, seem receptive to the news.
“A lot of people will use AI to basically cheat…so at least they’re trying to fix [the policy],” senior Pickle Pipher said.