AI in Education, Research, and Women’s Day! – AI in Week 10 https://www.felienne.com/archives/8880 (Sun, 09 Mar 2025)

Why do schools and universities want to purchase AI?

Grandiose language from OpenAI: they’re planning to invest a whopping $50 million under the banner NextGenAI in 15 American universities! To put that in context, the company is valued at $300 billion, so this is only about 0.02 percent of its market value. What OpenAI gets in return is clear to me: they boost their image as a cutting-edge tech company while simultaneously undermining the credibility of higher education and scientific research.
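For anyone who wants to check that back-of-the-envelope figure, a two-line sanity check (using the $50 million and $300 billion numbers from the piece):

```python
# The $50M NextGenAI pledge as a share of OpenAI's reported $300B valuation.
pledge, valuation = 50e6, 300e9
print(f"{pledge / valuation:.4%}")  # -> 0.0167%, i.e. roughly 0.02 percent
```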

Marc Watkins, a professor at the University of Mississippi (one of the 15 institutions involved), has often been overly positive about generative AI in education. Yet he recently penned a remarkably incisive piece asking: why do universities actually want to purchase and integrate AI solutions?

Students are already using AI all the time—and they manage just fine with free tools. Teachers looking to create lesson plans can often make do with existing solutions, for example, those integrated into Microsoft systems for which they’re already paying (and which offer some data protection guarantees). So why pay extra for AI? Watkins argues that “… institutions are doing so under the faulty assumption that buying access gives them greater control over how their users interact with AI.”

In a desperate attempt to take control of the situation—under the guise of “we have to do something, and this is something”—they go ahead and purchase systems and run pilots. The entire article is well worth a read, but that question is one we really need to ask ourselves—and our employers!

If you want a glimpse into how uninspired companies are when it comes to implementing AI in schools, just click on that ad that kept popping up on my LinkedIn all week. Follow the link and enjoy the pitch: “Enabling evidence-based decision-making in education through agentic AI.” It’s obvious the whole piece was machine-translated; you can tell from quirky details like “…the reading habits and performance of 4th-graders,” where the translation engine got stuck on “4th graders” and even kept the superscript! Yes, a perfect way to convince us teachers that CapGemini is really the way to go!

Do students actually use AI for plagiarism?

And another assumption—that students are mass-using AI for homework assignments. It seems as plain as day, right?! But research from Spain involving 500 students shows it’s not such a big issue. Only a small portion of students use AI to commit plagiarism, and those are typically unmotivated students who, in “the old days,” would have resorted to creative shortcuts (copying, book-summary websites, and so on).

I often think back to a colleague to whom I once complained that students were sharing answers via WhatsApp. He said: “If many students are committing plagiarism, then you’re asking too much of them.” And now research clearly backs that up. Students who want to learn, who see the value in an assignment and have the mental space for it, do the work themselves.

Peer review—Can it be done with AI?

Sometimes I really think I’m losing my mind, and this week that feeling hit again when I read a piece in Nature (!!) about using AI for peer review. The very first sentence reads:

“Do you ever feel that agreeing to review an academic paper guarantees a wasted workday? You’re not alone.”

Unbelievable, right? Of course, peer review takes time, and naturally I’m not thrilled when I have to do it. But that’s not a criticism of peer review itself. At its core, peer review is about contributing to the advancement of science—literally my job. The real issue is that, due to budget cuts, publication pressure, and a threefold increase in the number of students and PhD candidates, I have so many other things to do!

Really, Nature, cancel yourself. If you’re not going to stand up for the protection of the deep intellectual labor that peer review represents, then who will? (Yes, I know this is a column, and Nature’s editors might argue that any opinion should be allowed a platform, but it’s Nature—it’s not just any publication, and they don’t explicitly state that the piece doesn’t reflect the editors’ view.)

And if it’s not about reviewing but about research—do scientists want to use AI for new discoveries? No. Google already has to answer for a host of claims regarding inventions.

And what about the people who create the algorithms themselves? Do they really want their employees using them? Also no.

And I almost forgot Women’s Day!

I was a guest at Windesheim last Thursday, giving a lecture on programming and feminism—and it was fantastic (a video will be coming in a future newsletter). As some of you might remember, last year in honor of Women’s Day I spearheaded a project about women on the radio: FemFM.

That’s why it’s especially cool that Buma/Stemra has released a new report on the state of things in the Netherlands. And this piece is a few weeks old but still relevant: the Netherlands Institute for Human Rights has ruled that Meta violates human rights by showing women different job ads than men.

How do you debate someone who does not want to listen?

A beautifully crafted, short video explains how to debate with people who say, “Yeah, but research shows that…” without actually reading the research—using it solely as an attack. The advice: shift the discussion towards values rather than cold facts. The video is very well produced!

Movie Tip!

I originally wanted to stick strictly to AI news, but, well, people contain multitudes—and besides, I wasn’t all business on Twitter back in the day, so here’s a little extra. You absolutely must not miss the film A Real Pain, for which Kieran Culkin (you know, Roman Roy—but also… Cousin Fuller in Home Alone!) has just won an Oscar. What a film—laughing out loud at a movie about the Holocaust is something only the truly great can pull off. Culkin’s counterpart, Jesse Eisenberg, who also wrote and directed the film (and, in my opinion, was undeservedly snubbed for an Oscar for screenplay), has truly delivered a unique piece.

This article was translated by Johanna Guacide.

Netflix’s Ponzi scheme, and quitting American tech platforms is possible – Week 9’s AI news https://www.felienne.com/archives/8853 (Mon, 03 Mar 2025)

Are you still watching?

What a piece in n+1 on Netflix’s Ponzi scheme of attention, describing how Netflix has pivoted from distributor of content to creator of it, and the effect of that shift on the quality of movies. According to the relentless author, Netflix is “staffed by unsophisticated executives who have no plan for their movies and view them with contempt”. It’s a long read, but absolutely worth it, because it made me reflect on my own mindless binge-watching of true crime while clearing out my inbox. Even if it is not easy in today’s firehose of stuff, we have to take media seriously (and if you can’t get enough of the piece, there’s also this making-of!).

Stories have been an important way of thinking about life since ancient times. As Brenda Laurel writes in Computers as Theatre, which I happened to read this week: “Drama was the way that Greek culture publicly thought and felt about the most important issues of humanity … they were tools for thought and discourse in the Polis”.

And you could easily argue that it is the role, or even the duty, of a media maker to take on that responsibility. Bertolt Brecht believed that a play was only finished when the audience applied it to their own lives; Will Tavlin said in his interview for good reason: “Bad movies have serious consequences”.

By the way, the magazine n+1 is definitely worth reading. I have a subscription to the paper version myself, and over the last few months I’ve often recommended the wonderful piece An Age of Hyperabundance, about how techbros think.

The culture of Silicon Valley

Speaking of how techbros think: this brutal takedown in Bloomberg of the new book by Alex Karp, CEO of Palantir. Well, if Bloomberg criticizes a book about making big money this much, it must be really bad! But if you read the piece carefully, it is actually quite nuanced, in the sense that every critical question they ask is justified and deep, not at the level of “haha, look, a small mistake” but pointing out gaps in the theory. It shows once again how difficult it is for Silicon Valley types to come up with clear definitions of what they criticize (ironic, by the way, since Karp has a bachelor’s degree in philosophy and should therefore know better). Karp’s entire book (which I haven’t read, by the way; who would, after this piece?) breathes the atmosphere of Mencius Moldbug, the creepy architect of the mindset there (if you don’t know him, I would not recommend looking him up).

Closer to home I read about the recently published dissertation by Inte Gloerich about the blockchain hype (remember that?) and what it says about Silicon Valley culture. Now there is a book I would love to read!

Leave American tech platforms!

This week’s tip is: Leave American tech platforms where you can. Dutch tech hero Bert Hubert explains why that matters clearly in The Register. Convinced? Here is a handy overview of alternatives.

Earning money from fake news? It’s once again possible on Facebook

TechCrunch reports that Facebook is not only going to fire its content moderators, but is also going to start allowing people to make money off of posts that are later found to be fake. Maybe I should stop being so surprised at Facebook, but it’s still a bit of a shock every time they do something so obviously bad.

Governments worldwide use internet access as a sanction

Last week, a very worrying report was published at the human rights conference RightsCon. The report explains that in 2024, more governments than ever turned off their citizens’ internet access. This also happens in regions where Starlink is used to exploit people.

The report counts at least 296 such cases in 54 countries, from Ukraine to Gaza to France, an increase of 35%. Now more than ever it is time to keep fighting for free and open access to the internet, everywhere!

Good news!

We keep our spirits up, and some good news: in Arizona, it may soon be illegal to use AI to assess medical claims; the bill passed by a vote of 58 to 0. Yay, onward!

You know who else is no longer a fan of AI? Nadella, the CEO of Microsoft (which owns half of OpenAI), has announced that he no longer sees the point of it. The hype is too great, he says in Futurism. If AI is really that good, he says, “it’ll be clear when it starts generating measurable value”. Okay, so he has invested billions and now he’ll believe it when he sees it. Check! Elsewhere things are apparently not going very well either, because at Google they have to work harder, according to Brin: 60 hours a week, people, otherwise Google’s AI will never come to be!

Do you, like Brin, think that AI will soon be able to do all the tasks that humans can do? Then read this great piece in the Guardian by neuroscientist MJ Crockett, which makes a clear case that human affection and creativity are not a chore that needs to be done.

And finally, the concept album Is This What We Want , by 1,000 British artists. The entire album is nothing but the sound of empty music studios, because that’s what we’ll get if we let AI into everything. What a brilliant act of protest. More of this please!

Who owns your cloned AI voice? – Week 8’s AI news https://www.felienne.com/archives/8808 (Sun, 23 Feb 2025)

Well, I could probably say this every week for the next few months (or even years?): it’s been one of those weeks. But here goes!

Who owns your cloned AI voice?

This powerful piece in MIT Technology Review – especially poignant for me, as the daughter of a father who died of ALS – talks about people who, due to a neurological disease, lose their muscle strength but can regain the use of their “own” voice thanks to AI. Wow, that’s truly a useful application of AI, wouldn’t you say? Can’t have anything against that, Hermans!

However… not in late-stage surveillance capitalism. Because what does this piece show? If you use bad language—say something like “get your arse down here”—you get banned! It reminded me of that article from 2022 about people whose eye implants suddenly failed because the company behind them went bankrupt.

It perfectly illustrates what I’ve mentioned in several places (including at the very end of last year’s VPRO podcast De Machine): I truly have nothing against AI per se. Someone who trains their own AI on their own voice, text, images—and then uses it in any way they see fit—has my blessing. You’re not excluding anyone, you’re not stealing anything, and you probably won’t end up any dumber; perhaps even the opposite. (And if you’re training it in the middle of the day in your house packed with solar panels—well, that’s perfectly fine; since feeding power back to the grid doesn’t yield much these days, you might as well run a small AI!)

Speaking of voices and AI, there’s also an article in TechCrunch about software that can adjust accents live! Yes, because we certainly don’t want customers exposed to the full spectrum of ways people actually speak! They describe their mission as a “deeply human mission to break barriers and reduce discrimination.” Well, if I were to dress up as a man, I might experience less sexism, but that doesn’t make the world any less sexist.

Using AI doesn’t save you time, it makes you slower

So far, I’ve mostly reported on “regular” news, but there’s plenty happening in the scientific realm of AI as well. One popular claim is that generating text with AI saves time—but is that really so? In an insightful article, three medical researchers outline several reasons why it might not. Synthetic text reads differently than normal text and may actually require more time to process. Plus, the feeling of being responsible for a text that isn’t your own does something to you as a person. Other recent studies confirm this: one Chinese study involving over 6,500 radiologists found that using AI can increase the risk of burnout, because in rare cases a lot of time is needed to build up context by digging into the details. Another study among nearly 200 doctors communicating with patients via AI showed they spent just as much time—if not a bit more—as when doing it manually (though the difference wasn’t statistically significant).

In a completely different field—programming—we see similar results. Recent research from GitClear shows that as AI usage increases, there’s also a rise in “code churn” (lines of code that are modified again shortly after being written). The hypothesis is that AI-generated code isn’t as robust, and therefore needs fixing more often.
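To make “churn” concrete: here is a rough sketch of how you could approximate it yourself on any git repository. Note that this is my own crude proxy (summing added and deleted lines per file over a recent window), not GitClear’s actual metric, which tracks the fate of individual lines over time.

```python
# Crude "code churn" proxy: per-file lines added/deleted over a recent window,
# parsed from `git log --numstat`. Files where deletions rival additions are
# being rewritten almost as fast as they are written.
import subprocess
from collections import defaultdict

def churn_by_file(repo=".", since="4.weeks"):
    """Return {path: [lines_added, lines_deleted]} for commits in the window."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}",
         "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = defaultdict(lambda: [0, 0])
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":  # skip blank lines and binaries
            continue
        added, deleted, path = parts
        totals[path][0] += int(added)
        totals[path][1] += int(deleted)
    return totals

# Print the ten most-rewritten files of the last month.
for path, (added, deleted) in sorted(
        churn_by_file().items(), key=lambda kv: -kv[1][1])[:10]:
    print(f"{path}: +{added} / -{deleted}")
```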

A small study by one of my students at VU last year revealed something similar in education: reviewing and refining AI-generated feedback takes more time than doing it yourself. My own argument is that if you truly want to check whether something is correct, you not only have to review the AI’s text but also think about whether something important is missing—and that’s only possible if you’ve done the thinking yourself.

ChatGPT Is Getting More Conservative

Well, last week in NRC I wrote about the covert abandonment of the ambition to be neutral, and now TechCrunch reports that ChatGPT is becoming more conservative—since, of course, everything must be allowed to be said, free speech and all. It reminded me of an article I wrote in 2016 about Paul Graham; back then, he said he’d “go into resistance” if Trump won, even though he was still tight with Peter Thiel (whom we now all know as JD Vance’s mentor). I believe my piece has stood the test of time and still clearly shows why it isn’t such a good idea to allow all ideas into the public debate.

Your Book, My Book?

The Verge reports that starting February 26 (that’s next Wednesday!) you’ll no longer be able to download ebooks from your Amazon ebooks dashboard. This small change (since everyone naturally syncs over Wi-Fi) raises interesting questions about who really owns a book you’ve paid for. It makes clear that in today’s world, when I buy a book (often at a price similar to a physical book), I’m only purchasing the right to read it—not to truly own it, cut it up, pass it on, annotate it, and so on. Also, since Amazon is in “Team Trump,” it might just be that the dozens of feminist books in my library could soon disappear for being too woke. Some really wild things have happened in America lately.

So if I were to advise you on what you absolutely must not do, it would be to quickly download all your AZW3 files and convert them to epub with something like Epubor Ultimate. I certainly didn’t do that with the no fewer than 146 books in my library (as they say: buying books and reading books are two entirely different hobbies).
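For the nerds who will of course also not be doing this: a minimal sketch of what such a bulk conversion could look like. I’ve swapped in Calibre’s ebook-convert command-line tool for Epubor here (my own substitution); it assumes Calibre is installed and, importantly, it only works on DRM-free files.

```python
# Hypothetical bulk AZW3 -> EPUB conversion using Calibre's ebook-convert CLI.
# Assumes Calibre is installed; works only on DRM-free files.
import subprocess
from pathlib import Path

# "my_kindle_books" is a placeholder folder name for your downloaded files.
for azw3 in Path("my_kindle_books").glob("*.azw3"):
    epub = azw3.with_suffix(".epub")
    subprocess.run(["ebook-convert", str(azw3), str(epub)], check=True)
    print(f"converted {azw3.name} -> {epub.name}")
```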

Hype Is Necessary in Silicon Valley

This interview with Meredith Whittaker of Signal from 2023 was widely shared last week, but I’m including it again because of her brilliant take on hype:

“Venture capital looks at valuations and growth, not necessarily at profit or revenue. So you don’t actually have to invest in technology that works, or that even makes a profit, you simply have to have a narrative that is compelling enough to float those valuations.”

Hype and growth are essential—it’s not just about making “plain” profits. It reminded me of that disheartening article (also from 2023) about the Instant Pot, which explains well that we’re not dealing with ordinary capitalism where companies merely want to make money, but with shareholder capitalism (see also the work of the inimitable Ed Zitron).

A Quirky (Online) Nerd Conference

A little treat for the nerds among you! Next week I’m speaking at the online conference HYTRADBOI, and the other talks look really fun too. There are still tickets available!

Good News!

Now for some good news! It might not be earth-shattering, but it’s certainly amusing: the company behind the Humane AI Pin—a sort of “ChatGPT brooch”—is shutting down because they couldn’t get it off the ground and people didn’t think it was worth the money. Well, who could have predicted that? (Also, remember Google Glass.)

And we could all use a bit of positive government news! The government might not be great, but no government at all is perhaps even worse. Here are nine examples of successful government policies, along with a thoughtful analysis of how Rwanda managed to contain the Marburg virus through swift action (I could comment on America and bird flu, but this is the good news section…).

If you’d like to read more about how governments can actually be pretty decent, then the book The Entrepreneurial State is highly recommended!

This post is a translation of the original newsletter, published in Dutch.

Scarlett vs. Deepfakes, Math In or Out of the Classroom, and Online Dating – Week 7’s AI News https://www.felienne.com/archives/8749 (Sat, 15 Feb 2025)

Well, this week was… something else. Last week I didn’t put together a proper news overview because I went skiing, and while I was up in the mountains I’d occasionally check the news on BlueSky – and it wasn’t looking very happy. Elon Musk claimed all sorts of data and systems for himself, all development aid was put on hold, and research funding was decimated. I can’t just take a week off for vacation when everything’s falling apart! But anyway, on to this week’s AI news.

Boing Boing:
Boing Boing reports this week on a 2023 study showing that CAPTCHAs don’t work at all. I wanted to include it here, but it turned into a full-blown story, so I wrote a longer piece on it here (in Dutch).

The Verge:
Scarlett Johansson has made a strong call to the U.S. government to crack down on deepfakes after a video of her and other celebrities went viral. It’s unlikely anything will change overnight, but it’s great to see her continuously weighing in with her perspective!

Nature:
I also came across a fascinating paper in Nature this week that shows real-world math skills (like doing calculations at a market) don’t automatically transfer to academic math in the classroom – and vice versa! Not directly related to AI, but it is in a way, because anyone who thinks that chatting with AI will actually teach young people valuable skills is missing the point about the “transfer” of learning.

There’s also another research study, by my former colleague Advait Sarkar (among others), that will soon appear at CHI. Their research, involving over 300 knowledge workers, shows that using AI doesn’t stimulate critical thinking—it actually hinders it! In fact, heavy reliance on AI might even lower your problem-solving skills.

Wired:
Wired remains a beacon of light in these days of relentless tech coup stories from the Trump era, but this week I particularly enjoyed a light, thought-provoking piece on AI and dating apps. It may not have a lot of heft, but it does raise some interesting questions: is it okay to use AI to chat with a potential partner? And how do you handle those first dates “without” it?

Economist:
Online scams are, as expected, spiraling out of control – and they might already be as big a revenue source as online drug sales. This long article does a great job explaining how even the scammers themselves are often victims from low-wage countries.

NRC:
You may have seen it on LinkedIn already, but I also had my say with an op-ed in NRC about the (im)possibility of politically neutral AI! (in Dutch)

Good News, by Popular Request:
Here’s some uplifting news! English science museums have scanned and digitized their entire collections – that’s 500,000 objects! In an extensive report, they detail how they tackled the project. Or go ahead and explore the collection yourself – check out the stunning difference engine by Babbage or the Jacquard loom, for instance.

And finally:
In the U.S., it’s been decided that images created with AI—even those generated with complex prompts—cannot be protected by copyright (although they do note that this might change in the future).

Week 6’s AI News https://www.felienne.com/archives/8900 (Fri, 07 Feb 2025)

Ok, full disclosure—I was on vacation this week! But now that I’ve got a few hundred subscribers, I obviously need to make sure there’s some content, right? So before I left, I set aside some timeless pieces about AI.

Algorithmic Agnotology

I came across the term “algorithmic agnotology” on BlueSky and had to look it up. Agnotology is the study of ignorance, and there’s plenty to learn about that. In this extensive interview, Alondra Nelson—former head of the White House’s Office of Science and Technology Policy and now a professor at the Institute for Advanced Study (in Princeton, which our very own Robbert Dijkgraaf led for 10 years!)—explores the topic. The entire interview is well worth your time, but this term really stuck with me: the idea that ignorance can be used as a tool in the AI discourse. For instance, creators can say, “Yeah, we don’t understand the algorithm either!” You saw that play out during the Dutch childcare benefits scandal. But not understanding something doesn’t absolve you of responsibility!

It made me think of when I recently tried to change a hotel booking, and the guy on the phone said it simply couldn’t be done. Not that he didn’t want to help, but “computer says no.” That’s another form of ignorance—an “I don’t know how to do this in the system” excuse. You see what you lose when you move from a simple reservation system to a more complex one: rules can no longer be bent, edge cases are smoothed away, and things become impossible simply because the system can’t handle them. I even wrote something similar about this recently in Elsevier!

NPR:

A great piece (also available as a podcast) on how the internet is gradually disappearing. I often tell my students that once you put something on the internet, you can’t just take it back. That’s certainly true for social media, but with many personal or even news websites it’s different—40% of the websites that existed in 2013 are now gone! So whole chunks of history will eventually just… vanish! (This was also the topic of my final BNR column for the end of 2024.)

The Verge:

And then there’s this delightful long read on how Apple has long failed to bring an AR headset to market. I’ll admit, it’s a prime example of rubbing one’s hands together and saying “I told you so.” Nobody wants that junk—neither with Google Glass, nor now.

Oh, and as a parting note… Watch this 15-minute video by Stanford psychiatry professor Anna Lembke, and you’ll feel like throwing your phone into the sea. I’m going to do the same—I’ll officially switch to a “dumb phone” soon. More on that later!

Week 5’s AI News https://www.felienne.com/archives/8731 (Sat, 01 Feb 2025)

Here we are again! New readers, welcome!! There are already quite a few of you—so cool! Please do reply to this email; the downside of a newsletter compared to social media is that I hardly ever hear back (but hey, no hate messages either, which is nice…).

404 Media:
It was, of course, DeepSeek week, but these five bullet points from Gary Marcus are really all you need to know about it. And then there’s this piece from 404 Media—the headline is satirical, but the article itself is as sharp and well-crafted as 404 Media always is! Founder Samantha Cole even guested on Mystery AI Hype Theater 3000 this summer—definitely an episode worth checking out.

Teen Vogue:
Since Trump’s first term, Teen Vogue has been a beacon in the darkness—a publication offering solid background pieces and left-leaning political analysis, proving that there’s more to life than “superficial girl/women entertainment”; the world is far more complex. In one excellent article, a Black woman recounts how she received hardly any responses on LinkedIn when presenting her true identity, but when she passed as white, she suddenly had much more success. A tale as old as time, of course, but it remains painfully relevant, especially with the rollback of DEI initiatives on both sides of the political spectrum. And by the way, this is another example of an AI application that makes me think: yes, I can see that! If I could present myself as a man in the digital world… maybe I would!

Byline Times:
A strong, historical overview of Russian disinformation on Twitter (now X) reads like a Tom Clancy thriller! The trajectory from Brexit to Trump gives you goosebumps, and it clearly shows that the anger of “the people” isn’t innate but is systematically fanned by those spinning their own agendas. The piece also reminds us of the Cambridge Analytica scandal—something that seems to have been forgotten by everyone and their grandmother! In my naïveté back then, I thought: this is the end of Facebook; now everyone will finally leave it, but alas.

Guardian:
The AI pilots in the UK aren’t going very well. In my experience, pilots in large organisations are really just the starting points for projects that people don’t really want, yet they’re framed as pilots to defuse critics by saying, “it’s only a pilot.” So I find it encouraging to see that pilots which aren’t working are being scrapped—even though it also shows just how badly things are going!

Bloomberg:
The European Commission is inviting major tech companies—in line with the Digital Services Act, which requires online platforms to filter out “socially harmful content”—to undergo a stress test. I talked about this in my BNR column this week.

Dan Meyer on SubStack:
Did you see those astounding AI results in Nigerian education earlier this week? There’s certainly a lot to be said about that.

And now, a little POSITIVE NEWS to wrap things up!
It’s nice to have some good news every now and then!

Guardian:
In 2024, 11% of all electricity in the EU was generated by solar panels, compared to 10% from coal. Gas accounted for 16%, and that share has been declining for the fifth year in a row. Perhaps we’ll still make it in time…? One can dream!

New York Times:
Bookshop, an online platform that enables small bookstores to sell physical books to compete with Amazon, will soon also offer ebooks! As a not-so-proud (actually, rather embarrassed) owner of a Kindle, I’m really going to try to switch over to Bookshop as much as possible! Tucked away in the article is the fact that many more paper books are still being sold than ebooks, which is great! There’s nothing quite like a real book—with bookmarks, scribbles, dog-eared pages, and all that. When I got back a completely battered book from a student last year, along with a heap of apologies, my heart skipped a beat! What could be better than a student who literally devours a book?

This article was translated by Johanna Guacide.

Week 4’s AI news https://www.felienne.com/archives/8725 (Fri, 24 Jan 2025)

It was another wild week of AI craziness!

The Guardian:
I’ve written before about all the madness in the UK, and this week they’ve launched something new over there: a tool powered by AI that helps Cabinet Ministers understand how people might react to their policies. And naturally, I can’t help but ask: do you really need AI for that? Wouldn’t it be better to just go out into your constituency and talk to people? That way, you immediately make people feel seen—and that’s exactly what you’d want as a politician (I hope so…).

Follow the Money:
Always delivering top-notch investigative journalism—which is exactly what we’re going to need more of in the coming years. For instance, there’s this piece about an algorithm called Preselect Recidive that the police use to predict whether young people will reoffend! It’s an extremely disturbing article—one can easily imagine which groups would be affected the most. It’s like a Dutch version of Minority Report.

Het Financieele Dagblad:
This week I even made a brief appearance in the Dutch economic and financial daily FD (Het Financieele Dagblad), with a delightful headline of mine: Programming Is More Than Just Typing Code (in Dutch). Zuckerberg’s idea to fire mid-level programmers makes no sense—good software requires thought, a consistent plan, and coordination. Moreover, it seems like nothing more than a diversionary tactic, meant to distract from criticism of the anti-DEI measures and all that “masculine energy” talk.

Bloomberg:
A long read on the influence of YouTube on Trump’s popularity offered a truly in-depth data analysis of 2,000 videos—totaling roughly 1,300 hours. What’s interesting about the analysis is that many top podcasters do talk about politics, yet they explicitly claim to be apolitical. They cover topics like sports betting, the gym, and meme culture, casually weaving in content about Trump that fits perfectly with the “locker room tough guy” vibe. It wasn’t until just before the election that most shows started interviewing explicitly political guests. They also target a male audience—only 12% of the guests in the analyzed shows are women, and according to the piece, these are also the people who voted for Trump (50% of men under 30, it claims). But what struck me most from the article wasn’t the data, but a quote from Mike Majlak: “The easiest route these days to viewership is by creating enemies”. These are men who understand the algorithm and know that it isn’t quality, but anger, that gives a show its cachet.

Club de Madrid:
Eighteen former European leaders are calling on von der Leyen to take on Google and dismantle the AdTech sector. I didn’t really know the term “AdTech” until that excellent episode of Mystery AI Hype Theater 3000 last summer (the entire podcast is a must-listen!). What really stuck with me from that episode is that Google has very subtly shifted its goal—from “Hey, here’s a website where you might find what you’re looking for” to “Here’s the answer to your question,” which implies an entirely different kind of inquiry.

Nature:
Research involving almost 1,500 people, published in Nature, shows that working with AI changes your judgment and actually amplifies the biases already present in people. In fact, the effect of AI on people is greater than the effect of other people. The researchers write about “… a mechanism wherein AI systems amplify biases, which are further internalized by humans, triggering a snowball effect where small errors in judgment escalate into much larger ones”. This is a form of the moral deskilling that Evgeny Morozov writes about in his book To Save Everything, Click Here. If you no longer have to think about what is correct (for instance, because you can no longer sneak past the metro turnstiles), eventually you’ll stop thinking about it altogether. It’s good that there’s comprehensive research confirming this—even if it’s a shame that such studies are needed when the outcome is so predictable.

NBC News:
An AI system designed to detect weapons on school premises (a dystopian idea in itself) didn’t work well in a recent school shooting because the shooter wasn’t properly in view of the cameras. This is a prime example of a technical fix for a problem that could just as easily be solved through legislation or standards (as is the case in the Netherlands).

A Few Minor Notes:
I’ll finish with a somewhat hopeful message: we’re hearing louder calls for more decentralized social media, for instance on 404 Media. It’s a bit unfortunate that BlueSky turns out to be a nicer alternative to Twitter than Mastodon—since Mastodon is arguably more democratic—but it’s something. Oh, and a heads-up! Microsoft might soon use all your texts in Word to train its AI. Don’t want that? Here’s how you can disable it.

One more thing, for comic relief… You can now create mind maps with ChatGPT. There’s plenty of excitement on social media about how it saves hours of studying and simplifies everything!!! But if that’s the case, then you really haven’t grasped the purpose of a mind map—the true value lies in processing complex information yourself.

This post was translated by Johanna Guacide.

Week 3’s AI news https://www.felienne.com/archives/8678 (Mon, 20 Jan 2025)

I’m trying something new again! Instead of endlessly bookmarking articles because I think “I’ll come back to this later,” from now on I’ll start a blog post each week and update it throughout the week. That way, by the end of the week I’ll have a nice overview, and I’ll hopefully be able to easily find things without having to rely on big tech’s search (even a decent, small-tech tool like Pocket isn’t very searchable once you’ve accumulated 15 years’ worth of content). Let’s see how this goes!

TechCrunch:
OpenAI has quietly removed references to “politically neutral” AI from its policy documents. A striking twist—until now, OpenAI has consistently stressed its commitment to AI “alignment,” meaning AI that is beneficial for humanity (which, by definition, isn’t neutral!). Free speech is the prevailing sentiment in Silicon Valley these days (just look at Mark Zuckerberg and Peter Thiel). In public, OpenAI is mainly campaigning to be allowed to collect more data without the usual hassles over copyright and the like.

New York Times:
Very different in tone but with a similar vibe is the much-criticized interview with tech investor Marc Andreessen. According to him, the leaders of big tech originally just wanted to be “good people”—and all the progressive moves they’ve made recently (like supporting same-sex marriage) were merely because they cared about being seen as virtuous by their peers; they didn’t truly believe in it (no worries, Trump & friends!). But when Biden (in his view) intervened too forcefully—possibly even hinting at AI regulation—it went too far. Suddenly, they had a change of heart and decided they should actually become Republicans (and apparently he even thought Hillary Clinton was Biden’s predecessor… a mistake the Times quickly corrected). It’s especially interesting to see how the “left” in the US was, in reality, very neoliberal and pro-business—and was widely seen that way. It makes you wonder how different the world might have been if we’d been truly left-wing (also here in the Netherlands). In a well-researched background piece, tech critic Brian Merchant explains how the Democrats essentially helped prop up the tech giants.

Futurism:
Tech startup School.AI has created a chatbot that lets you chat with Anne Frank. This isn’t just a fascinating technological development—it’s also a prime example of the deskilling I’ve always been wary of. Truly understanding what the Holocaust was is hard work, and it shouldn’t be simplified. Yet a chatbot can give the misleading impression that these profound subjects are just bite-sized chunks. For example, you could ask the chatbot whose fault Anne’s death was, and it would respond with some vague remark about how you really can’t pin the blame on anyone (see the screenshots here).

But truly grasping the Holocaust is challenging—and it’s a lifelong endeavor. This summer, I read about Hitler’s attire, which shifted my perspective, and just this week I watched the film A Real Pain (highly recommended), which again changed the way I think. Understanding such a vast and complex history takes time, effort, and a willingness to consider different angles. Was it also the fault of those who worked in the camps? Of those who betrayed Jews? Even of those who didn’t help? Some questions have no clear answers, and pretending otherwise is a problem in itself.

This post was translated by Johanna Guacide.

What does it mean for a university to have an opinion? https://www.felienne.com/archives/8575 (Mon, 13 Jan 2025)

In this thorough piece, The American Prospect explains that while some people might hope that universities will save democracy, that might be tricky, since universities themselves are not at all democratic.

This piece reminded me of a lecture on AI & education that I gave to students right before the Christmas break, in which I casually said that I am an anarchist (I will explain the context of that one in a later post!). One of the students got kind of angry about this and said: “What does that mean? Do you want the world to have no rules?”

I must admit I was caught off guard by this question, but it was a great one, because it made me think (maybe I assumed that students would also be anarchists, or would at least sympathize with them?). So my answer might have been a bit half-assed, but I have thought about it some more since, and I think I can formulate it pretty well now.

VU, my university, is capable of having an opinion, for example on Ukraine, fossil fuels, or gender equality. So how do those opinions come to be? Sometimes they come from the board of the university or from lower levels of power (deans, heads of big institutes), sometimes from even higher levels of power (all universities together), sometimes from special working groups. But as an employee, I did not choose any of those people in power. So that is not democratic at all, which I find problematic, because I do have to live with the VU’s decisions, either because people ask me about them or because they affect my working life.

We could have elections for deans, rectors, department heads, et cetera, which would be more democratic, but for me that is not enough, since a lot of unexpected things can happen within a few years (like the wars in Ukraine and Gaza) on which opinions need to be formulated. And unlike political parties, which often have a voting history and a clear set of norms, if I vote for a candidate in the university context, how do I know they will actually hold my opinion often enough?

So why not have everyone at the university (professors, non-academic staff, students) deliberate together, without power structures? You can imagine all sorts of systems: a random selection of people each time (like jury duty), digital systems in which larger groups of people discuss or vote, and so on. If I say I am an anarchist, I mean that I want universities (and ultimately: society) to make decisions in a non-hierarchical way, independent of systems of power. And if that feels weird, scary, and chaotic, I can only say: is our current system not weird, scary, and chaotic?

Charlie Chaplin and the death of the internet https://www.felienne.com/archives/8542 (Mon, 06 Jan 2025)

We had a teenager over for New Year’s Eve, and one of his biggest hobbies is explaining to me and my husband (“boomers,” as he calls us, even though we are millennials) what terminally online kids do these days: which words and memes and emoji are still in use. And this is how my final conversation of the year 2024 came to be about the distracted boyfriend meme (which the teenager finds totally boomer).

By Antonio Guillem – Wired, Fair use

I remembered then that I had read on the internet a while ago (turned out to be 2018, haha, when the teenager was 10) that there is a Charlie Chaplin version of this meme (watch the whole film Pay Day, which this still is from, on YouTube).

That is not to say, of course, that the meme’s creators were ripping off Charlie Chaplin here, since people on Twitter came up with several older paintings and even a tapestry with similar images. But what struck me was what I said next, without even really thinking about it.

I said: “I am happy I saw that meme before AI, because now I wouldn’t be sure if it was real.” Even if I could have found the whole movie the still comes from, that too would have been very easy to create with AI nowadays, and it would have cost me a lot of time to dig into it. I am pretty sure that in 2018 I did not give it a second thought; I just saw the image and could realistically assume it to be real.

I can’t bear to think of the extra work that we all now have to carry out when sharing pictures or video or audio, or of the fact that people might refrain from sharing funny things for fear of fakes.

Oh, the internet we have lost!!

https://www.snopes.com/fact-check/distracted-boyfriend-meme-come-real-movie