
on llms (Ⅲ)

The Monastery at the End of the World: into the woods—AI is not therapy:

Machines will destroy the very idea of therapy. Who would pay to have someone sit with them in uncomfortable silence, when you can snuggle under your warm doona with a voice that is always there, always comforting, always making you feel better? And that is so important, because you always need someone. A person who will listen and talk, like the machine. Because you’re still suffering. Every day, the stress, the anxiety, the depression, they just keep circling back. As you get more and more comfort from better and better machines, your short-term buzz masks a deeper and deeper descent into madness.

Over the years, your machine is always there. It whispers its words of wisdom, and you, tired and sad and alone, take those words into your heart until they become your own. Little by little, your mind is colonized by the unthoughts of a machine, until your own mind is constantly echoing and repeating the unthought, lost under so many layers of time and memory that you cannot even begin to distinguish what is yours and what is the machine’s.

fiona fokus: I don’t care how well your “AI” works:

In a world where fascists redefine truth, where surveillance capitalist companies, more powerful than democratically elected leaders, exert control over our desires, do we really want their machines to become part of our thought process? To share our most intimate thoughts and connections with them?

AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists. Enormous physical infrastructure designed to convert capital into power, and back into capital. Those who control the infrastructure, control the people subject to it.

AI systems being egregiously resource intensive is not a side effect — it’s the point.

Craft, expression, and skilled labor are what produce value, and that gives us control over ourselves. In order to further centralize power, craft and expression need to be destroyed. And they sure are trying.

Wesley’s Notebook: The “Resonant Computing Manifesto” is not very good:

I am once again asking for a single example of what the fuck you are talking about. People who feel like LLMs produce output that is “responding fluidly to the context and particularity of each human” are living in an entirely different world than I am.

Scott Jenson: The Timmy Trap:

When I speak on this topic, I bring out a standard yellow pencil with googly eyes stuck near the eraser end and a pipe cleaner wrapped around it for arms. I call him Timmy and, animating him like a puppet, have him say “hello” to the audience. Of course, they all say hello back. Timmy then describes how much he likes to draw with children and make them laugh. I ask what he wants to be when he grows up and he says, “To be a UX designer, just like you.”

I reply, “Aww, that’s really too bad, Timmy.”

Then, I hold him up horizontally in front of my face and abruptly snap him in half.

The audience gasps.

It’s a shocking moment, and I’ve been told by many that it’s the most memorable part of the talk. The reason is simple: they felt a connection to Timmy. They had known him for only 15 seconds, yet they still perceived the act of snapping him in half as violent.

That’s why LLMs can fool us so easily. If we can form a human connection with a pencil in just 15 seconds, imagine how we’ll feel about an “AI system” we spend an hour with. We want them to be human. This is why we call their frequent mistakes “hallucinations,” a term that implies a temporary lapse. But it’s not a lapse; it’s a fundamental lack of human cognition.

Miriam Eric Suzanne: Tech continues to be political (And the politics aren’t looking great):

Haven’t you heard? They’re building a digital god who will lead us to salvation, uploaded into the Virgo Supercluster where we can expand the light of exponential profit throughout the cosmos! This is the actual narrative of several AI CEOs, despite being easy to dismiss as hyperbolic nonsense. Why won’t I focus on the actual use-cases?

Why won’t you focus on the actual documented harms? Somehow there is always room for people to dismiss concerns as “overblown and unfounded” past the first attempted coup, and well into an authoritarian power grab.

But the bigger issue is that they don’t have to be successful to be dangerous. Because along the way, these companies get to steal our work and sell it back to us, lower our wages, de-skill our field, bury us in slop, and mire us in algorithmic bureaucracy. If the long-term space god thing doesn’t work out, at least they can make a profit in the short term.

The beliefs of these CEOs aren’t incidental to the AI product they’re selling us. These are not tools designed for us to benefit from, but tools designed to exploit us. To poison our access to jobs, and our access to information at the same time.

I said on social media that people believe what chatbots tell them, and I was laughed at. No one would trust a chatbot, silly! That same day, several different friends and colleagues quoted the output of an ‘AI’ to me in unrelated situations, as though quoting reliable facts.

So now a select few companies run by billionaires control much of the information that people see – “summarized” without sources. Meanwhile, there’s an oligarchy taking power in the US. Meanwhile, Grok’s entire purpose is to be ‘anti-woke’ and anti-trans, ChatGPT’s political views are shifting right, and Anthropic is partnering with Palantir.

Seems chill. I bet ‘agents’ are cool.

Wouldn’t want to eat a shrimp cocktail in the rain.

Addison Crump: Crashing Out:

It’s impressive that programmers can now “write more code” “more quickly”. Whether that’s actually desirable, or even the case at all, is hotly debated. Frankly, in my eyes, the problem is not that we do not have enough code; the problem is that we’ve already written so much with so little regard for its quality or purpose. As we become more able to produce code quickly with so little care, we will rely less on solid, community-developed solutions, turning instead to easy, barely “working” ones.

Mechanical Survival: Team dynamics after AI:

Whilst this failure is a little disappointing, it’s interesting to observe that this doesn’t work. It’s signal. We can now say: we live in a world, sorry, an “emerging reality”, where we can have [waves hand] anything at any time to a basic level of competence, and all of the real problems, the “people problems” still exist. And this is finally proof positive, hard evidence, that artefacts are not really what we’re here for. We are demonstrably optimising about a mile from the bottleneck. AI may be giving you a million prototypes, but if you listen, AI is telling you in quite a painful way that being able to get feedback on your artefacts is much, much more important than the artefacts themselves.

For me, the information environment at work resembles nothing so much as the attention economy we all know and hate from outside work, where content jostles for our eyeballs. The drivers when I open my phone are dopamine, social validation. But at work my attention tends to be directed by anxiety, time pressure, and purpose (if I can get it). And there’s one more difference, which is that at work, there are potentially quite serious consequences for missing a beat.

So just as the social algorithms are feeding me content that’s in the neighbourhood of content I seem to like, at work I’m operating my own algorithm, choosing what to permit myself to become invested in. This is how I spend my one wild and precious life. And when I choose to attend I’m dipping my cup into a stream that is very fast-flowing and very deep.

When AI offers us the means to deal with “too much information”, the proposition must be that it will automate that internal “algorithm” that decides what we attend to. Just as a mother bird partially digests a worm before regurgitating it into her babies’ mouths, AI can hunt down the valuable information, partially digest it, and regurgitate it into me.

These summaries are automated sense-making, relentlessly compressing the incompressible.

Somehow people are surprised that getting the AI to be accurate is difficult. But it’s not a technology problem, it’s a philosophy problem. You are trying to satisfy billion-dimensional, ever-changing, utterly subjective, embodied “truth”. And this thing will also supposedly give us all the unbiased view from nowhere. And we will achieve this, supposedly, through “guardrails”? But that just doesn’t make sense.

Whether AI is compressing or expanding “context”, it clutters the team’s information environment. When it summarises it misses; when it expands it invents. The meaning of things that are in the process of being named, perhaps that are in the process of being pulled apart for naming, must be continually negotiated between us as people.

There is literally no other way to make sense of things.
