More Human: On Science Fairs, Artificial Intelligence, and Educational Experiments

When I was in sixth grade, I did a science fair project about AI. It was 2004, the year the iPod mini was released, the year Google launched Gmail, and the year Facebook was founded by students at Harvard University. At age 12, I didn’t know enough about artificial intelligence to even imagine it as a topic for a science fair, but another parent at the school was a computer science professor and offered to help a student do a project about the technology. My dad—also a professor but less interested in shepherding a science fair project—encouraged me to take the opportunity.

The same year I did my science fair project on AI, two robotic rovers named “Opportunity” and “Spirit” landed on the surface of Mars. While their initial mission was to spend three months exploring the planet, they would end up continuing their driverless journey far longer, with Opportunity’s AI-driven experiment lasting an almost unfathomable fourteen years.

I don’t remember knowing about the Mars rovers back in sixth grade, but I do remember standing beside my tri-fold presentation board, with my AI project running on a laptop in front of it. On the screen was a little dot bumping its way around a checkerboard maze.

I remember manipulating the layout of the squares, frustrating the little moving dot, blocking it, freeing it, boxing it in. Block-block-space-block-space-block-space. The computer was learning, the professor explained, though it felt more like I was confusing it. Most interesting to me was a slider on the side of the simulation that allowed me to adjust something called “scholasticity.” It was a measure of how willing the little dot would be to take a risk, to get creative, to try something new.

When I look up the word “scholasticity” today, it seems my memory is wrong. Google’s AI search helpfully suggests that I might be thinking of Scholastic, the education company responsible for the book fairs of my youth. It takes some creative thinking for me to correct both Google’s AI and my own decades-old memory. The word, it turns out, is stochasticity, which perhaps I misremembered due to its similarity to those childhood book fairs. Google’s AI Overview confirms what sixth-grade me knew:

Stochasticity refers to the inherent randomness or unpredictable variability within a system, even when governed by underlying rules, meaning exact outcomes aren’t certain. 

While I no longer have that tri-fold presentation board, the laptop, or the computer program, I do still have an archive of digital files from my elementary years. Nested within the folder labelled “gr. 6” is a folder called “science fair.” Inside, the word stochasticity is nowhere to be found, nor is the word scholasticity. But in a file called SCIENCE FAIR.doc, I find a chaos of seemingly random Word Art letters, their fill gradients and drop shadows every bit as luminous as I remember:

         R F I I I C C L L L A N N T E G E T A I I E

Though I’m sure AI would get there faster, I find pleasure in the brief moments of unscrambling them: these are the letters that spell the title of my project, the letters I must have cut out carefully and taped across the top of my tri-fold presentation board: ARTIFICIAL INTELLIGENCE.

The other documents in my grade six science fair folder are less colorful. There’s an Acknowledgements page that thanks the computer science professor for his help on my project. There’s a document titled “AI Questions,” where each of the questions I typed is followed by many blank ruled lines, as if to signal my lack of answers. There’s a document called “AI rough 01,” a synthesis of the research I did, punctuated by some carefully chosen clip art. In a separate file are the copy-pasted Wikipedia sources I used: “AI,” “The Turing Test,” and the chess-playing computer “Deep Blue.” At the time, Wikipedia was just three years old, and our teachers hadn’t yet begun telling us that it shouldn’t be trusted as a source. I was decades away from teaching my own students that there were good reasons to use and even cite Wikipedia, despite what their previous teachers might have insisted.

*

I didn’t grow up to be a computer science professor—I wound up teaching writing. My first-year composition students come to me much like that little dot in my sixth-grade science fair demonstration. They’ve spent years bumping around on a grid their teachers keep rearranging, and they’ve learned some things about how to navigate it. They organize their ideas neatly in five paragraphs; they avoid the pronoun “I.” They adopt a formal style with complex sentences and sophisticated words. When they arrive in my classroom, I can keep rearranging those squares around them, leaving one path, or none, or many. I can continue to dictate what kinds of writing they can see and reach. But what I really want to do is to move that slider labelled stochasticity. I want them to try a path that was blocked before, or that I haven’t imagined yet.

In the past few years, I’ve had countless conversations with teachers about how we might respond to AI–how we should react or adapt to what feels like a cataclysmic change. But the game we’re playing isn’t new, and many of our responses still amount to moving blocks around on a grid. There are things we know will make students less likely to turn to AI, and many of them happen to align with good writing pedagogy: scaffolding writing projects with proposals and drafts, talking with students about their writing in conferences or workshops, and asking students to connect their writing to their own local contexts and lives. There are things we know will make students more likely to turn to AI, and many of them happen to align with the failings of the education system we have raised them through: the feeling of being a bad writer, the pressure to conform to standardized English, the belief that our goal is a polished product, a grade.

AI is trained to choose the most likely and likable path—which is also a fairly accurate way to describe what we’ve been training our students to do.

*

It was halfway through spring 2024 when I ran my first AI experiment since sixth grade. I was teaching a first-year composition class themed around autobiographical writing, and despite the explicit connections to students’ lives in every assignment, and my emphasis on drafting processes, I had begun to see submissions that looked suspiciously like AI.

Determined to address the problem, I took a draft I was writing alongside my students and asked ChatGPT to write an essay on the same topic. I printed them both, without any indication of their authorship. On one side of the paper was an essay called “Daffodils and Okra.” On the other side was “A Tapestry of Daffodils and Okra: Unraveling my Lincoln, NE Experience.” In class, I passed out the copies, asking my students to read the two essays and decide which they preferred. As they skimmed through the pieces, I began to hesitate. Was this an unkind trap I’d laid? As a matter of practice, I try not to trick my students. But they already had the essays, so my experiment had to continue.

If you’d pushed me to articulate a hypothesis before beginning this experiment, I would have guessed that my students would see through my plan. I assumed they’d all prefer the essay I wrote, and furthermore, that they’d call out the use of ChatGPT. But what I found in our discussion was more surprising. While “Daffodils and Okra” won by majority vote, a significant number of students preferred ChatGPT’s “Tapestry.” 

The students who preferred ChatGPT’s writing pointed out that it felt universal; they admired the meaning and depth of sentences like “As I delve deeper into Lincoln’s cultural tapestry, I yearn to uncover the stories behind these symbols.” As I listened to my students describe their reasons to prefer ChatGPT’s writing, I heard phrases I recognized from my own teaching, words like “organization,” “diction,” and “theme.” They preferred ChatGPT’s writing, I realized, because they had been taught to. 

As I listened to my students describe their reasons to prefer my draft, their language was less certain. They were grasping for ways to articulate what it was about the writing that still seemed valuable to them, reaching for language they hadn’t been given in their years of writing education. Eventually, one of my students said, “It just feels more human.” I wrote the words up on the board.

When I told my students one of the essays had been written by ChatGPT and asked them which one, there was unanimous agreement. But it took longer to parse out how they knew. They pointed to words like “indelible” and “adorning” and “verdant,” vocabulary that just moments ago they had admired. We found hallucinations: a reference to a local daffodil parade that sounded lovely but didn’t exist. One student noticed that ChatGPT knew plenty about daffodils, but when it came to okra, it suddenly became more vague. This, we hypothesized, might have to do with biased datasets and race. Dutifully, my students showed me they were learning the lesson I had set out to teach. They found language to describe the failings of AI-generated writing. They found language to deliver the answer I was looking for.

If I were presenting this experiment at a science fair, I’d have to begin by admitting that my hypothesis was wrong. My students didn’t universally prefer my writing—at least not until I taught them to. To create a more interesting presentation, though, I’d have to admit that my question was wrong. The more important result of this experiment was not which essay students preferred, but why they preferred it, and what language they used to describe their values.

I left class that day thinking not about ChatGPT’s words, but about my student’s. More human. More human.

*

My fifth-grade science experiment was inspired by the early 2000s resurgence of seventies style. It was 2003, the year that Blu-ray technology came out, along with classics like Finding Nemo, Pirates of the Caribbean, and Elf. We wore bell bottom jeans and hung bead curtains across our bedroom doors. Somehow, a friend and I got it into our heads that for the science fair, we should try to make a lava lamp.

When compared to that little grid-bound dot that would be the centerpiece of my science fair project the following year, this lava lamp idea feels far more exciting. There is no controlling the shifting globs of a lava lamp—their stretch, their rise, their fall. Though governed by some laws of physics and chemistry, those bubbles are captivating precisely because of their unpredictability.

The project seemed simple enough: even at age 11, we understood the underlying principles of immiscibility and density. We needed two liquids that wouldn’t combine, and heat to make the bubbles expand and rise. We consulted the internet for ideas, but the web was still in an early enough stage that searches on Ask Jeeves for “how to make a lava lamp” came up mostly empty. Today, when I try the search, AI is quick to design the science fair project for me. Still, my digital archive strikes me as far more interesting.

We relied on a website called oozinggoo.com, along with a fair bit of trial and error. While the website is unfortunately no longer an active domain, I can only imagine that it would not meet the criteria I teach my students to consider for credibility. I doubt the website included any details about its authorship, where the information came from, or why we might choose to trust it. Still, from the notes I saved, it seems it didn’t lead us too dangerously astray. Oozinggoo.com recommended that the two liquids should be “non-flammable (for safety)” and also “not very poisonous (for safety).” Following this advice, we used brine and benzyl alcohol, dyeing the latter with ink from a permanent marker we cracked open. According to the notes we typed and taped to our presentation board, we built a base for the lava lamp with a tin can, some fabric, a light fixture, and a circle of wood.

Although it’s nowhere in the document, I remember two more important details from our experiment. First, we had to add an ice pack to the top of the glass vessel, because after running for the length of a science fair, the whole project became far too hot, and the Sharpie-dyed bubbles all hovered at the top, refusing to fall. As it got hotter, our project also began to emit a strange smell. We ignored it until a chemist parent stopped by our display and started asking questions about the liquids we’d used. It turns out that, while neither benzyl alcohol nor brine was flammable or “very poisonous” on its own, when heated, they underwent a chemical reaction that produced an odorous gas. After consultation with the chemist, we turned the lava lamp off for good.

Looking back, it strikes me as inadvisable for fifth graders to undertake any kind of chemistry involving heat and glass, especially when the research is based on questionable sources like oozinggoo.com. But the learning experience was a memorable one, and we were proud of what we managed to create.

As part of our tri-fold display, we included a sign that read “PLEEEEEASE DO NOT TOUCH” alongside a clip-art image of a knock-off Donald Duck about to bring a hammer down on a late-90s PC. We also created a multiple-choice quiz for science fair visitors, with reading comprehension questions to test their understanding of our project.

         Sodium chloride is:

a) A toxic chemical

b) A delicious treat

c) Table salt

d) The name of my dog

We promised to grade them on the spot. Already we had begun mimicking the forms we knew our learning was expected to take. A quiz couldn’t capture our creative process of trial and error, so we focused on testable facts. Block-block-space-block. The multiple-choice answers were not possibilities, they were traps.

*

If my first teaching experiment about AI taught me to consider why my students were turning to the tool, my second experiment came from a desire to learn more about how they were using it. It was the same step-by-step reasoning I’d been taught to apply in my science fair projects—I was trying to isolate variables, to understand this shifting thing.

In fall 2024, in my first-year composition syllabus, I told students:

Because AI writing is new, teachers and students alike are still developing the literacies we need to make decisions about how and when to use it, and how it may impact learning. If you have ideas for how AI could support your work in this class, talk to me about it first.

It was my attempt to treat the question openly, as an experiment. I hoped it would give me a chance to dialogue with students about how they wanted to use AI, and make decisions about what to permit and prohibit in conversation with them.

Perhaps predictably, it didn’t lead to my intended result. Many of my students were computer science majors and asked questions I wasn’t prepared to answer with my limited understanding of the technology. When students were brave enough to suggest uses of AI, from brainstorming to proofreading, I either shot them down or voiced so much hesitation that my students took it as a no.

When, drawing on my previous experiment, I asked students to analyze a ChatGPT-generated essay in class, they had a rousing discussion critiquing the hollow prose. They called it robotic, pretentious, wordy, surface level, and redundant. One student compared it to a terrible movie plot, while another described it as “the definition of feeling instead of actual feeling.” At the time, I was thrilled by the discussion.

But now, I have to consider the possibility that this discussion, too, was my students performing a “right answer.” They knew what an English professor would desire of them, and so they leaned right in. Performance or not, our conversation surely cemented my students’ impression that my invitation to experiment with AI was really more of a trap. Block-block-space-block-block.

As the discussion wound down, there was one brave student who surprised me. Her peers had been critiquing ChatGPT’s conclusion for being dull, repetitive, and overly general. “What if I write my conclusions that way too?” she asked. It was an opening, ever so small, for us to speak more honestly. I hypothesized that she had been taught some of the same writing patterns that ChatGPT had learned, and she agreed. I suggested that, if she wanted, she could learn other ways to conclude her essays, ways ChatGPT didn’t know.

By the time we got to their final assignment, where I asked my students to craft their own argument about AI writing in college, they determined that I was firmly in the “con” camp. This assumption gave me an ideal opportunity to help the class move beyond pro-con arguments, pointing to the limitations of a conversation that has been largely characterized by this binary view. But it didn’t teach me much about how my students were using AI. Even when I encouraged them to make their arguments more persuasive by including examples from their own lives as students, they were hesitant. They understood that the more important rhetorical task was not persuading a fictional audience of their argument, but rather, persuading their professor of their credibility as students and writers.

Late in the semester, in a bid to earn their trust, I designed a class activity asking students to generate potential titles using AI. If I wanted them to treat it as an experiment, I needed to signal that I was open to the experiment too. This time, I was the one who saw my writing reflected in the results. Every single title generated by ChatGPT, or SnapAI, or the other tools students chose to try, followed the same pattern: a brief catchy phrase, followed by a colon and a longer explanatory title. “What if I write my titles that way too?” I asked them.

My experiments that semester didn’t teach me how my students were using AI, but they did teach me how my students were reading teachers’ reactions to it. One student confessed that they’d begun to adjust their writing style to avoid being falsely flagged by AI detection software. I had commented on their use of shorter sentences and fragments, and suggested that they experiment with varying their sentence lengths. Politely, they declined, preferring the safety that their new writing strategy provided. This was far from what I’d dreamed of when, a semester earlier, I had the discussion with my students about writing that was more human. If there were a slider for stochasticity, we had been sliding it down to zero. My students were learning exactly what we were teaching them to do. Block-space-block-block-block.

*

The first science fair project I remember, and the earliest that appears in my nested archive of files, comes from third grade, though it was a culmination of several years of curiosity. Growing up in Edmonton, most of my experimentation took place not in science fairs, but outside in the snow. I learned the different consistencies of powder, slush, and ice. I learned which were heaviest to shovel, which would make the softest snowball, and which were best for building snow forts. After hours spent building the walls and rooms of my snowy palace, I needed something to adorn its frosty architecture. I froze water in yogurt containers and ice cream tubs, adding a few drops of food coloring to turn them into jewels.

It was the year 2000. We had just survived Y2K without collapse, our computers rolling over into a future the engineers had failed to imagine or to plan for. I was eight years old, still saving my school projects on a 3 ½” floppy disk, quarreling with my classmates over who would get the best color, and marveling at its inner components that were visible through the translucent plastic. 

In my science fair folder from that year, there is only one document, titled “The Problem.” In that file, I outlined my experiment in full, beginning with the research question:

For a few years I’ve been making ice crystals by putting water in yogurt containers and ice cream tubs and adding a bit of food coloring. But the food coloring concentrated in the middle. So this year as a science project, I am trying to figure out:

·  why that happened?

·  and how I can make it so there is color all around?

Even in my elementary, third-grade prose, I can see the mold I was trying to fit my curiosity into—mimicking the step-by-step logical syntax of scientific writing, bullet points and all.

Although I described it as “food coloring concentrated in the middle,” the image in my memory is more vivid. In the center of the clouded blocks of ice, food coloring blooms: a suspended color, one that might call forward to the lava lamp that I would create in another two years. But the edges of this frozen color are softer, more like a teabag plunged into hot water that has only just begun to diffuse. I remember the way the color concentrated in the middle, and I remember that it was beautiful. I remember I wanted the block of ice to be homogeneous, colored all the same.

My methods section was, in retrospect, quite impressive. I broke the problem down into two parts, each of which had two “ideas” (what a more mature scientist might call hypotheses). First, I tried to determine why the problem was happening. Then, I tried to figure out how to solve it.

This, too, is the approach that I’ve tried to take with my experiments in teaching. I’ve treated AI as “The Problem.” I’ve tried to understand why the problem is happening, and then I’ve tried to solve it. But whenever I try to move into the solving part, I start adding blocks to the grid, and my experiment fails. Every time I try to solve it, I discover another variable I haven’t yet accounted for, and I’m returned again to the first part of my third-grade science fair method, the why.

After learning the many ways my students were reading my teacherly perspective on AI, the next semester I removed my syllabus policy and decided we’d create our AI guidelines together. We began our discussion not with their work, but with mine. We talked about whether I should be permitted to use AI for grading (no), for assigning small groups (yes), or for developing lesson plans (maybe). Most of the possibilities we considered became more complex as we talked through them. One student suggested that I could use AI to write lesson plans “on days that you’re really tired,” an act of generosity that took me aback.

“What if I’m tired every day?” I replied, half joking, but wanting us to consider the idea to its potential conclusion. Another student expressed concern that AI would write lesson plans that were too challenging for them as students. They wanted lesson plans that understood them as writers, their strengths and needs. This, they knew, was something that AI couldn’t do.

We worked our way to the meta question: should I be allowed to use AI to detect their AI use? Only if I relied on other measures too, my students decided. They knew, as well as I did, the risks of false flags. Have a conversation with us, they suggested. Read our other writing, they said. They wanted to be treated as humans, not as dots navigating a grid. 

Block-block-space-block-block-space.

We went on to discuss the ways that they would or wouldn’t be allowed to use AI, many of them mirroring the complexity and nuance that they’d afforded me. None of the guidelines we landed on were all that different from the practices that I’d been employing, but deciding them together changed the way I felt about the rules. I’d like to think it changed the way my students felt too, though I have no sure way to analyze the results of that part of the experiment.

Students are not dots, nor are they chemical solutions with stable properties. Teaching is an entirely uncontrolled experiment, at least by scientific standards. Even those who research teaching—like I do—tend to agree that it’s not possible to prove solutions or best practices because of all the contextual aspects that intervene. The variables I can manipulate most readily are in myself, in my own thinking. And that’s what returning to my childhood science fair projects does for me.

In addition to teaching writing, I teach other teachers too. Over the past few years, many of them have come to me hoping for a solution to “The Problem” of AI. Instead of answering their question, I tell them about my experiments. “I’ve changed my mind every semester,” I say, and talk about the things I’ve tried, the ways I’ve been surprised, and the solutions that have failed. I hope they’ll begin to see their own efforts, frustrations, and failings as experiments too. Developing responses to AI in writing classrooms has often felt overwhelming, and discouraging, and infuriating, and sad. Late in the semester, when my students are tired, and when I am too, the signs of AI in my students’ writing can be enough to make me forget all the human work we’ve done. It is harder to remember that I don’t want to control my students, that what I care about is their experience and their words. I say it over and over in my mind: More human. More human. We are in this experiment together, trying to figure out what we value in writing, and how we want to learn.

*

I will try to understand the first question first, because if I understand why it happened, I will have a better clue for figuring out how to make the color go all around.

In third grade, I had two “ideas” or hypotheses for why the color didn’t spread. The first was that the liquids behaved like oil and water. Though I didn’t yet know the word immiscibility, I knew enough to make a plan: “I can just make the original food coloring mixture, and don’t set it outside to freeze and watch it for awhile.”

I like to imagine my 8-year-old self, patiently watching and waiting to see if food coloring would separate from water, watching and wondering and waiting and eventually concluding I was wrong.

Some teachers suggest that the only failproof solution to AI is to return to the blue books of my youth—those staple-bound exam books where we wrote unrevised essays under the watchful gaze of a teacher and a ticking clock. Those teachers may be right, at least for now. But I’d prefer to return to my childhood science fairs instead. I want solutions that are destined to fail. I want questions we truly do not know the answers to.

This year, I’m planning another experiment. I’m giving every one of my students a composition notebook. We’ll write in class, by hand. We’ll doodle, draw, and dream. They’ll never turn these notebooks in. I want to open space for unpredictability. I’m moving that slider for stochasticity. I want my students to experience a part of writing that feels wildly different from the writing that both they and ChatGPT have been taught to value. I want to share a part of writing that is important to me, and that I’m not sure they’ve really been allowed to try. I have no hypothesis, only a question: what will change if we lean into this more human form of writing?

Experiments take patience, they take time. Often the questions we set out with are not the ones that matter most in the end. That AI rover called Opportunity bumped along the surface of Mars for fourteen years, and while it never found the water it was looking for, it did find signs of where the water used to be. Near the end of its exploration, Opportunity began to lose its memory. When a dust storm developed in 2018, scientists lost contact with the early 2000s technology. Still, for a year, they continued to send transmissions, over 1,000 unreceived commands, the last one a Billie Holiday song, “I’ll Be Seeing You.” Even past this poetic ending, the experiment continues. Opportunity was succeeded by Curiosity, who was joined more recently in its mission by Perseverance. Only time will tell what these new rovers learn.

I became a writer and a teacher not because I have a penchant for pretty sentences—though of course I do—but because I like experimenting. As EdTech companies and universities rush to offer solutions and define what’s next, I am trying to slow down and keep questioning. Fourteen years from now, my students will have their own archives of their learning, their own misremembered vocabulary and remembered pride. In the meantime, they’re teaching me what I value in this work I do. I want to learn to write in more human ways. I want to teach in more human ways too.

Back in third grade, I did solve the problem. I figured out how to make the color go all around. But once I found the solution, I don’t remember using it. I continued making those blocks of ice with their strange food coloring blooms. It turns out I liked them better that way. I had to understand what else was possible and then choose it.
