Queer Gen Z and the Copy Machine: While We Were Scrolling, AI Moved On Without Us

AI agents are rewriting the rules of knowledge work. Most queer Europeans have not been told the rules have changed.

Queer Gen Z and the AI. Illustration: Esme Blegvad/The Guardian

This is not a pop-up.

You can simply scroll past—but please don't overlook the importance of an independent queer press.

The time is now or never.
Queer voices disappear without independent journalism to amplify them.
We document what others won't touch.
We hold power to account when it threatens our communities.
This work exists only because you choose to fund it directly.

Donate over €25/month and receive our limited-edition tote bag — a badge of resistance, a statement that you stand for fearless journalism.

We are grateful!

Can't donate? Sharing our work helps more than you think


The photocopier had been flashing a paper-jam warning for three minutes when the student — twenty-two, a master’s candidate in cultural studies at a Viennese university, someone who could navigate four social-media platforms simultaneously and edit a TikTok video in the time it takes to boil an egg — turned to me with an expression that can only be described as architectural defeat. The expression of a person confronting a structure whose logic is not merely unfamiliar but actively hostile. I walked over, opened the side panel, removed a crumpled sheet of A4, closed the panel, and pressed the green button. The machine resumed. The student watched the pages emerge with something close to wonder.

This is not a story about photocopiers, though photocopiers are part of it. It is a story about what happens when an entire community — queer people, broadly, but also the young, the economically precarious, the digitally confident who are not digitally literate — discovers that the technological ground beneath them has shifted so profoundly that the gap between those who noticed and those who didn’t may prove unclosable.

In the spring of 2025, the technology sector declared it the Year of the AI Agent. The term refers not to the chatbots most people had grown accustomed to — the ChatGPTs, the Google AI Overviews, the automated slop clogging social-media feeds — but to something categorically different: autonomous software systems capable of reasoning, planning, using external tools, and executing multi-step tasks with minimal human oversight. IBM surveyed a thousand developers building enterprise AI applications; ninety-nine per cent reported they were exploring or developing AI agents. Anthropic released its Model Context Protocol in late 2024, giving large language models a standardised way to interact with external software. Google followed in April 2025 with its Agent2Agent protocol, addressing how agents communicate with one another. Over the course of the year, agentic browsers — Perplexity’s Comet, the Browser Company’s Dia, OpenAI’s ChatGPT Atlas — began to reframe the browser itself as an active participant in tasks rather than a passive window. Deloitte predicted that twenty-five per cent of companies using generative AI would launch agentic pilots by the end of the year, rising to fifty per cent by 2027.

For the tech-literate hobbyist — the kind of person who runs a local language model on a home server, who subscribes to three AI newsletters, who has configured a workflow builder to automate their invoicing — this was electric. Months of work collapsed into weeks. Weeks collapsed into afternoons. A single agent, properly configured, could research, draft, edit, format and publish a document while its operator went for a walk along the Donaukanal.
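The loop such an agent runs is, conceptually, simple. The sketch below is a deliberately toy illustration of the "research, draft, edit, publish" pipeline described above: the tool functions and the fixed plan are invented stand-ins with no real model or network behind them. An actual agent framework would wire a language model into this loop to choose each next step itself.

```python
# Toy illustration of an agent's tool-calling loop. All "tools" here are
# stand-ins; a real agent would replace them with web search, a drafting
# model, a formatter and a publishing API, and would let a language model
# pick the next tool from the goal and the history so far.

def research(topic):
    # Stand-in for a web-search tool.
    return f"notes on {topic}"

def draft(notes):
    # Stand-in for a drafting step.
    return f"draft based on {notes}"

def edit(text):
    # Stand-in for an editing pass.
    return text.replace("draft", "edited draft")

def publish(text):
    # Stand-in for a publishing API call.
    return {"status": "published", "body": text}

TOOLS = {"research": research, "draft": draft, "edit": edit, "publish": publish}

def run_agent(goal):
    """Execute a plan step by step, feeding each result into the next tool.

    A real agent would ask a model to *choose* the next tool at every turn;
    here the plan is hard-coded so the sketch stays self-contained.
    """
    plan = ["research", "draft", "edit", "publish"]
    result = goal
    history = []
    for step in plan:
        result = TOOLS[step](result)
        history.append((step, result))
    return result, history

outcome, trace = run_agent("EU digital-skills gap")
```

The point of the sketch is the shape, not the substance: a chatbot answers one prompt and stops, while an agent carries state forward through a chain of tool calls until the task is done.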

For nearly everyone else, AI still meant the chatbot that sometimes got things wrong.

This bifurcation — between those who grasped the agentic turn and those who remained on the consumer side of the divide — runs through every demographic, every profession, every generation. But it cuts with particular cruelty through queer communities, for reasons that are structural, economic and maddeningly self-reinforcing.

Consider the data, because the data is damning. A comprehensive survey conducted by LGBT Tech in partnership with Data for Progress — polling 1,304 LGBTQ+ adults across the United States — found that while digital access is near-universal (ninety-five per cent go online daily), the depth and sophistication of that access is perilously thin. Sixty-eight per cent of LGBTQ+ respondents reported experiencing online harassment. Seventy-one per cent expressed concern about AI bias in content moderation. Sixty-eight per cent worried about algorithmic discrimination. Seventy-three per cent flagged misinformation as a primary concern. These are not the responses of a community exploring what AI can do for them. These are the responses of a community bracing for what AI might do to them.

The economic dimension sharpens the picture. A quarter of LGBTQ+ Americans earn less than $24,000 annually. LGBTQ+ individuals rely on public libraries for internet access at more than twice the rate of the broader population — sixty-seven per cent have used a library for connectivity, compared with less than a quarter of Americans overall. Among transgender respondents, that figure rises to eighty-two per cent. These are not people who are going to purchase a subscription to a premium AI coding assistant. They are not going to spend an evening configuring an n8n workflow. The tools that are reshaping the knowledge economy are, quite simply, not in the room with them.

And here is where the generational myth enters, the comfortable fiction that the young, at least, will figure it out. They will not — or rather, they will not figure out what needs figuring. A Salesforce survey found that only thirty-two per cent of Generation Z feel adequately equipped for essential workplace digital skills. For advanced competencies — coding, data encryption, cybersecurity, AI — the numbers crater further: twenty per cent feel confident in coding, eighteen per cent in cybersecurity, seven per cent in AI. Seven per cent. This is a generation that grew up inside screens but never learned what the screens are made of. They can post but not program. They can scroll but not script. They can consume AI-generated content without the faintest understanding of how it was generated, much less how to direct the generation themselves.

I teach critical thinking to master’s students. I can confirm this empirically. The photocopier incident was not an anomaly. I routinely help students with tasks that would have been considered baseline digital literacy a decade ago — navigating a file system, understanding the difference between a local and a cloud-stored document, exporting a PDF. These are bright, curious, intellectually ambitious people. They are not stupid. They are simply the products of a consumer technology ecosystem that was designed, quite deliberately, to make the workings of technology invisible. The smartphone is a sealed box. The app is a walled garden. The user is a guest who is never shown the kitchen.

The European picture sharpens this further, and unevenly. Eurostat’s 2023 data — the most comprehensive snapshot available — found that only fifty-six per cent of EU citizens aged sixteen to seventy-four possessed at least basic digital skills, against a Digital Decade target of eighty per cent by 2030. The European Commission’s own assessment concluded that, without significant intervention, the EU would reach barely sixty per cent by the decade’s end. But the continental average obscures disparities so vast they might as well describe different centuries. The Netherlands leads at eighty-three per cent; Finland follows at eighty-two. Then the floor drops: Poland sits at forty-four per cent, Bulgaria at thirty-six, Romania at twenty-eight. Austria — where I teach, where this essay is being written — hovers around sixty-three per cent, comfortably above the EU average but still a country where more than a third of the adult population lacks the skills Eurostat defines as basic. And these are basic skills: information literacy, online communication, elementary content creation. Not AI literacy. Not the capacity to configure an autonomous agent.

The EU’s 2025 enterprise data tells the other half of the story: only twenty per cent of European businesses with ten or more employees used AI technologies at all. Denmark led at forty-two per cent; Romania trailed at five. Among the “core” Western economies — Austria, France, Belgium — performance is typically mid-range, noticeably behind the digital leaders. The consistent frontrunners remain the Nordics and the Low Countries: Denmark, Finland, Sweden and the Netherlands. Britain’s digital-skills landscape is mixed: while large swathes of its population use AI tools in daily life — surveys show around seventy per cent of people have used AI personally — workplace adoption and skills depth lag behind global peers.

The gap between a Danish SME deploying text-mining agents and a Romanian micro-enterprise still navigating spreadsheet software is not merely a gap. It is a chasm with a language barrier, an infrastructure deficit and a funding shortfall at the bottom of it.

What this means, concretely, is that the agentic revolution is arriving into a Europe that is not merely unprepared but fractured in its unpreparedness. The Nordic states — Denmark, Finland, Sweden — are pulling away, their enterprises adopting AI at rates that doubled year on year between 2024 and 2025, their populations already literate enough to engage meaningfully with what the technology offers. Southern and Eastern Europe, by contrast, face a compounding crisis: populations with lower baseline digital skills, enterprises with lower AI adoption, and public institutions with fewer resources to close either gap. Germany, for all its industrial heft, recorded enterprise AI adoption below the EU average in 2024. France performs unevenly. Spain and Italy sit in the broad middle, neither leading nor catastrophically behind, but drifting. For queer Europeans specifically — disproportionately concentrated in urban centres but by no means exclusively so, disproportionately young but by no means universally tech-literate — the geography of this divide is not abstract. It determines whether the LGBTQ+ organisation in Helsinki can automate its grant pipeline while the one in Bucharest still struggles with reliable broadband.

Now place this generational reality inside the queer community and watch the compounding begin. LGBTQ+ people are disproportionately young: in many Western countries, younger cohorts identify as queer at significantly higher rates than older ones. They are disproportionately economically precarious. They are, as UNESCO’s work on the AI divide has documented, among the marginalised communities that bear the brunt of unequal access to AI technology and its benefits. They are more likely to rely on digital platforms for healthcare information, community connection and safety planning — and less likely to have the technical literacy to evaluate whether the AI systems mediating that information are trustworthy.

The result is a community that is simultaneously more dependent on technology and less equipped to control it. A community living, it is not too much to say, in parallel AI universes. In one universe, a queer technologist uses an AI agent to automate grant applications for an LGBTQ+ non-profit, compressing a month of work into a day. In the other, a queer teenager asks ChatGPT a question about hormone therapy and receives an answer that is vague, hedged, algorithmically cautious and potentially misleading — and has no means of knowing this.

The danger is not that AI is hostile to queer people, though it sometimes is, in the banal way that systems trained on majority data reproduce majority assumptions. The danger is subtler and more pervasive: it is the danger of irrelevance. Of the community being shaped by a technology it does not understand, cannot direct, and increasingly cannot afford to learn. The agentic revolution is not arriving in the future. It arrived in 2025. The question is no longer whether AI agents will transform labour, creativity and information — they are transforming these things now, this week, today. The question is who will be in a position to direct that transformation and who will be subject to it.

The answer, at present, is stratified along depressingly familiar lines: wealth, education, access, technical confidence. And within the LGBTQ+ community, where all four of those factors tend to be compromised, the stratification is not merely replicated but amplified. A community that fought for decades to be seen, to be counted, to be present in the systems that govern public life, now faces the prospect of a new system — more powerful, more consequential, more opaque than any that preceded it — in which it is, once again, absent from the design process.

Back in Vienna, the photocopier has been replaced by a newer model. It jams less often. My students still look at it with suspicion, though they are beginning, cautiously, to look at AI tools with something more like curiosity. In one recent seminar, I showed them what an AI agent could do — not a chatbot, but an agent, something that plans and acts and iterates. The silence that followed was not the silence of incomprehension. It was the silence of recognition. The recognition that the world had moved, was still moving, and that they had not been told.

That silence is the sound of the digital divide. 

The question is not whether this technology could sharpen democratic awareness — it plainly could. The question is who gets to use it that way. Because right now, the people who most need a tool that remembers — communities whose histories have been erased, edited, pathologised, left out of the syllabus entirely — are precisely the people least likely to know the tool exists, let alone how to make it work for them. A queer teenager in Budapest, Vienna or Bucharest, growing up in a country where the state would prefer that certain histories remain untaught, could ask an AI agent to reconstruct what was removed. They could ask it to find the activists, the organisations, the legal battles that preceded them. They could, for the first time, build their own curriculum. But only if someone shows them how. Only if the device in their pocket becomes something more than a feed to scroll through. The technology is not the problem. The gap between what it can do and who knows it can do it — that is the problem. And it is the same gap, wearing different clothes, that the classroom was built to close.

In an era when algorithmic feeds elevate charisma over competence and electoral campaigns can be waged in sixty-second clips, schooling becomes the quiet infrastructure of democracy. It fosters citizens who scrutinise budgets, question slogans and vote for programmes rather than personalities — who choose political arguments over TikTok theatrics. In societies still consolidating democratic norms, the syllabus is, in its modest way, a constitutional document.

Subscribe to our newsletter here and support independent journalism.
Join 12,000+ readers who receive it every Wednesday, with exclusive content, for just €5,99.

✦✦✦

If you have a tip and wish to contact us securely, you can write to [email protected], our encrypted email address. We take the protection of our sources seriously and guarantee strict confidentiality.

✦✦✦

When we learn of a mistake, we acknowledge it with a correction. If you spot an error, please let us know.

✦✦✦

You can listen to our podcast Queer News & Journalism on your favourite platform or go to our YouTube channel @GAY45mag.

✦✦✦

Let us know what you think at [email protected].

✦✦✦

Submit a letter to the editor at [email protected].

✦✦✦

GAY45 uses AI with human guidance and editorial oversight. We deploy AI to analyse data, sift through large volumes of material for investigative reporting, assist in drafting headlines and summaries, and generate translations and audio versions of articles. Images created by AI for illustrative purposes are clearly labelled as such. We do not use AI to write articles. Journalists remain ultimately responsible for everything we publish.

✦✦✦


We appreciate it. Thanks for reading.


Did we mention we accept donations? Indeed, love.

If this story matters to you, help us tell the next one — donate what you can today.

Support GAY45
Follow on Feedly