Why *not* to use LLMs in computer science education?


In a previous post I tried to describe the reasons I see being used to justify LLMs in CS education: 1) professionals use them, 2) LLMs can replace parts of teaching, and 3) students will use them anyway, so we as teachers have to deal with that somehow.

What I am missing in the current discussions around LLMs are reasons *not* to use them! This too is visible in the invitation for the panel, which says the “discussion will revolve around opportunities, challenges, and potential solutions”. In that description the only (somewhat) negative word is “challenges”. The things I describe in this post aren’t challenges, things to be addressed, but fundamental issues that cannot be fixed.

So let’s dive into some fundamental issues that curb my enthusiasm for LLMs in many applications, including education.

Why not 1. Climate impact

According to recent research reported by the BBC, generative AI will soon emit as much CO2 as the whole of the Netherlands. So, simply said, we put all those solar panels on roofs and all those windmills in the sea, only to have it all nullified by software. And software to do what? To cure cancer? To end hunger? No, software to generate cat videos and to save us from reading API documentation. On a planet that is burning and drowning, do we really think this is worth making things so much worse?

I think a very fun paper to write (maybe one day I will, if I have the time) would calculate the carbon footprint of the most recent ICSE: not the CO2 we burn to fly there (which is a lot, but can, in my eyes, be justified by science being a social process), but the CO2 of all the LLM training and querying behind the papers. Is it worth it?
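For what it is worth, the skeleton of such a calculation fits in a few lines of Python. Every number below is a made-up placeholder (energy per query, grid carbon intensity, queries per paper, number of papers), not a measured value; this is only a sketch of the shape of the estimate, which a real paper would have to fill in with sourced figures.

```python
# Back-of-envelope sketch of the "LLM footprint of a conference" estimate
# proposed above. All constants are illustrative assumptions, not measured
# values; a real paper would need a sourced figure for each one.

KWH_PER_QUERY = 0.003       # assumed energy per LLM query, in kWh
CO2_KG_PER_KWH = 0.4        # assumed grid carbon intensity, kg CO2 per kWh
QUERIES_PER_PAPER = 50_000  # assumed LLM queries behind one paper
PAPERS = 300                # assumed number of accepted papers

# Inference only: the (much larger) amortised training cost is not included.
total_kwh = PAPERS * QUERIES_PER_PAPER * KWH_PER_QUERY
total_co2_tonnes = total_kwh * CO2_KG_PER_KWH / 1000

print(f"{total_kwh:,.0f} kWh, roughly {total_co2_tonnes:,.1f} tonnes of CO2")
```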

Why not 2. Exploitative business practices

I no longer buy fast fashion, because I can’t explain to myself that I am willingly participating in the exploitation of people, supporting their terrible working conditions (while others benefit from their labor). Instead I buy second hand, or I make my own clothes. Everyone, of course, is free to decide for themselves what they consider ethical consumption, but using LLMs, whether you like it or not, supports the continued exploitation of labor in the developing world.

In addition to exploiting underpaid and overworked content moderators, I feel LLMs are also exploiting me, personally. The Hedy repo contains maybe a hundred thousand lines of code, which I made public so that people could learn from it. Our EUPL license states, for example, that a licensee can “modify the Work, and make Derivative Works based upon the Work”, which I am totally ok with if it is done by a person: if someone wants to make a JavaScript version of Hedy, they can absolutely copy my grammar and reuse the transpiler where applicable.

But open source licenses were never really designed to prevent AI usage (in retrospect, they should have been!), and the EUPL that we use states that “Those rights can be exercised on any media, supports and formats, whether now known or later invented, as far as the applicable law permits so.”

Do those media include gen AI? I am not a legal scholar, so I don’t really know (and I believe that in this case the jury is still out, quite literally, in a few lawsuits). Maybe it violates the attribution clause, which states that the license information should be shared with the code, something that is clearly not happening with LLMs.

But the law does not decide what I find morally correct; we all know that many things that are immoral have been legal. And I feel that gobbling up my source code, repackaging it separate from its intended context, and then selling it for profit violates the informal contract I had in mind when sharing the code.1

Why not 3. Bias in output

Several recent studies have shown that LLMs exhibit large amounts of bias: simply ask GPT who made Hedy and it will not be me, but a man. Of course, to an LLM the logical completion of a sentence about who made a programming language is a male name, and that is just scratching the surface. “Brilliant” and “genius” are associated with men, and written text that uses African American forms of English is judged to be more “lazy” and “dirty” than white-coded English. Do we want the limited progress we have made in diversifying CS to be nullified by algorithms that will present students with 10 white men when they ask who contributed most to programming?

Why not 4. Leaning into LLMs will lead to deskilling of teachers, and diminish the value of the teaching profession

The last few decades have seen immense growth of universities; the university I went to more than doubled in size in the last 20 years (5,000 students when I went there, 12,000 now). In the Netherlands, this can be attributed to two factors: 1) more international students, as more BSc and MSc programs switch to English as the language of instruction, and 2) more people being eligible for higher education, since more people complete “academic high school” (VWO).

Even though more staff were hired, the growth has made professors more overworked, not only because of the number of students but also because of their level: international students will, in many cases, not command English as well as Dutch students command Dutch, and more students being eligible for university means, like it or not, lower levels of prior knowledge. Plus, of course, a highly competitive academic field (especially outside of the natural sciences) means that demands on scientific work come on top of teaching duties.

This situation creates very fertile soil for (gen) AI: if I have to grade 80 essays in a day, or if I don’t have time to update my PowerPoint slides with new research, using AI suddenly seems reasonable or even necessary. But grading and preparing aren’t purely productive activities. I would argue that they cannot be optimised or made more efficient, because the goal is not only to grade the essays; the goal is also to learn from what students are submitting in order to improve my teaching. And the goal of making slides is not to make the slides, but to read a few more papers in my field and update the slides with those I find valuable for the students.

Leaning into the idea that LLMs can do the deep thinking work required will inevitably lead to less skilled teachers who are no longer learning from their students’ mistakes and from new research. It will also hamper professors’ activism against work pressure, which has traditionally been relatively successful. In a pre-GPT era, having to grade 80 essays in a day might have led to people going on strike (students and professors alike), but now that it is “possible” to use an AI, the problem is no longer visible in the direct sense, only in a slow (but sure) erosion of the profession.

Soon, the Netherlands will have a right-wing government, and if the polls are any indication, so will the EU, and probably the US again after November too. Those governments hate science and education and want to budget-cut the hell out of us all. If we, the scientists, are already saying AI can replace us, even if we are careful about what it can and cannot do, it will be used as a reason to reduce funding even more, and we can all easily predict, without an AI, where that will lead. This holds especially true for computer scientists, who will be asked about their opinions more than others (while probably being impacted less).

Addendum

Why so few objections? This is part of a longer set of upcoming posts, but I have been reflecting on the field A LOT lately. I am a bit of an academic nomad, going from PL to SE to CSed and back to PL, and most recently I am doing some work in the history and philosophy of science applied to programming, mainly because I am so curious about why our field is the way it is. Why am I often the only person talking about climate impact and bias? While there are, of course, a gazillion reasons, a paper I read recently showed that being a social activist is a strong negative predictor for studying computer science2, and so is being artistic. So (compared to the general public) very few people who care about social justice are entering our field in the first place, and then of course our culture does a great job at making us care even less about others.

I know I sound like a broken record to my regulars, but a field that has a von Neumann medal, named after a guy instrumental in killing hundreds of thousands of civilians, does project some values onto the inhabitants of that field (although many, like me for a long time, might be utterly unaware of his involvement, which is a bit of an excuse, but also another sign that we just don’t care).

  1. It is also somewhat disorienting to see a paradigm shift happening in real time. I vividly remember the fury with which professors, when I was young, hated Microsoft, because they were making MONEY off of SOFTWARE. Even if Linux did not work well and its community was toxic as hell, there was one thing that was worse, and that was turning a profit.

    To see a whole field do a 180 and suddenly be excited about Copilot, which is not only software for profit, but software that profits from open source software, is… something ↩︎
  2. Sax et al., Anatomy of an Enduring Gender Gap – Journal of Higher Education 2016. ↩︎
