And what I find fascinating is that I see similar mimicking in my 5-year-old. Perhaps we shouldn't be so quick to call this a lack of being genuine. Sometimes emotions are learned in humans, but we wouldn't call them fake.
I don’t want to declare machines to have emotion outright, but to call mimicry evidence of falsehood is also itself false.
Mimicry is how kids learn the expected reactions to particular emotions. A kid mimicking your surprise doesn’t mean they are surprised (as surprise requires an existing expectation of an outcome they may not have the experience for), but when they do feel genuine surprise, they’ll know how to express it.
Because it's a statistical process generating one part of a word at a time. It probably isn't even generating "surprise". It might be generating "sur", then "prise" then "!"
But what is surprise, really? Something not following expectation. The distribution may statistically leverage surprise as a concept, based on how it has seen surprise expressed in its training data, e.g. "interesting!"
So it can be both true that it has nothing to do with the emotion of surprise, but appear as the emulation of that emotion since the training data matches the concept of surprise (mismatch between expectation and event).
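The "one part of a word at a time" point above can be made concrete with a toy next-token sampler. Everything here is invented for illustration (the token table, the probabilities); a real LLM does the same kind of weighted sampling, but over a learned distribution across tens of thousands of subword tokens:

```python
import random

# Toy next-token "model": each token maps to a probability table over
# possible continuations. The entries are made up for illustration.
model = {
    "<start>": {"sur": 0.9, "odd": 0.1},
    "sur": {"prise": 0.95, "face": 0.05},
    "prise": {"!": 0.8, ".": 0.2},
    "!": {"<end>": 1.0},
    ".": {"<end>": 1.0},
    "odd": {"<end>": 1.0},
    "face": {"<end>": 1.0},
}

def generate(model, token="<start>"):
    """Sample one token at a time until the end marker is drawn."""
    out = []
    while token != "<end>":
        choices = model[token]
        token = random.choices(list(choices), weights=list(choices.values()))[0]
        if token != "<end>":
            out.append(token)
    return out

print(generate(model))  # e.g. a list such as ['sur', 'prise', '!']
```

Nothing in this loop "feels" anything; "sur" then "prise" then "!" simply has high probability, which is the sense in which the output can match the concept of surprise without involving the emotion.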
It’s the emotional and physiological response to a prediction being wrong. At its most primal, it’s the fear and surge of adrenaline when a predator or threat steps out from where you thought there was no threat. That’s not something most people will literally experience these days but even comedic surprise stems from that shock of subversion of expectation.
LLMs do not feel. They can express feeling, just as you can, but it doesn’t stem from a true source of feeling or sensation.
Expressing fake feelings is trivial for humans to do, and apparently for an LLM as well. I’m sure many autistic people or even anyone who’s been given a gift they didn’t like can relate to expressing feelings that they don’t actually feel, because expressing a feeling externally is not at all the same as actually feeling it. Instead it’s how we show our internal state to others, when we want to or can’t help it.
It is a mistake to equate artificial intelligence with sentience and humanity for moral reasons, if nothing else.
We are also technically a statistical process generating one part of a word at a time when we speak. Our neurons form the same kind of vectorised connections LLMs do. We are the product of repeated experiences - the same way training works.
Our brains are more advanced, and we may not experience the world the same way, but I think we have clearly created rudimentary digital consciousness.
Because it has no mind, no cognition, and nothing to "feel" with. Don't mistake programmatic mimicry for intention. That's just your own linguistic-forward primate cognition being fooled by the linguistic signals the training set and prompt are making the AI emit.
I could describe the electrical and chemical signals within your neurons and synapses as proof that you are merely a series of electrochemical reactions, and can only mimic genuine thought.
You could do that if you wanted to ignore reality and be reductive to score points in an argument by purposefully conflating mimicry with intention, yes.
And that is dogma. It's unthinking circular reasoning.
It wasn't very long ago that scientists were certain that animals did not possess thoughts or feelings. Any behaviour that appeared to resemble thinking or feeling was simply an unconscious autonomic response, with no more thought behind it than a sunflower turning towards the sun. Animals, by definition, lack Immortal Souls and Free Will, and therefore they are empty inside. Biological automata.
Of course this dogma was unfalsifiable, because any apparent evidence of animal cognition could be refuted as simply not being cognition, by definition.
Look, either cognition is magic, or it's math. There really isn't a middle ground. If you want to believe that wetware is fundamentally irreducible to math, then you believe it's magic. If that's what you want to believe, then fine. But it's dogma, and maintaining that dogma will require increasingly willful acts of blindness.
You are using the word "math" in a magical way. Current LLM programs are reducible to math, and human cognition is reducible to math (which is a reasonable hypothesis). What you are implying is that because the word "math" appears in both sentences, it means the same thing in both. And that is magical thinking. Even if human cognition is reducible to math (let's assume that for the sake of discussion), it doesn't follow that it's the same math as in LLM programs, or even close. Or maybe it is, but we don't have any proof yet.
I agree with this. I'm not arguing that LLMs are conscious. We don't understand the math behind how our brains work; we don't know how close or far LLMs are to that; and we don't know how many different pathways to consciousness there are within math.
All I'm saying is that the argument that "It's not consciousness, it's just <insert any tangentially mathematical claim here>", is dogma. Given everything that we don't know, agnosticism is the appropriate response.
> It wasn't very long ago that scientists were certain that animals did not possess thoughts or feelings. Any behaviour that appeared to resemble thinking or feeling was simply an unconscious autonomic response, with no more thought behind it than a sunflower turning towards the sun. Animals, by definition, lack Immortal Souls and Free Will, and therefore they are empty inside. Biological automata.
It's cool that you can decide to take half-remembered incorrect anecdotes about what "scientists" are certain of at some indeterminate time in the past, sans citation, and use that to underpin your argument about a totally different thing.
> Of course this dogma was unfalsifiable...
...like your post's anecdata.
> Look, either cognition is magic, or it's math.
Yes, when you decide to draw a convoluted imaginary bounding box around the argument, anything can be whatever you want it to be.
LLMs have no mind and no intention. They are programmed to mimic human language. Read some Grice and learn exactly how dependent humans are on the cooperative principle, and exactly how vulnerable we are to seeing intent where none exists in LLM communication that mimics the outputs our inputs expect to receive.
Your cries of "dogma dogma dogma" are unpersuasive and lack grounding in practical reality.
Because the Google Search and LLM teams are different, with different incentives. Search is the cash cow they've been squeezing for more revenue at the expense of quality since at least 2018, as revealed in court documents showing they degraded results on purpose to keep people searching more, so they could show more ads. Google's AI embedded in search has the same goal: keep you clicking on ads. My guess is Gemini doesn't have the bad parts of enshittification yet, but it will come. If you think hallucinations are bad now, just wait until tech companies start tuning them up on purpose to get you to make more prompts so they can inject more ads!
I think I might enjoy the CPS scenario... let them call CPS, and wait for CPS to arrive, and then discuss with CPS who is endangering the child, the parent or the school. I'm pretty sure a judge will quickly decide whether their rule makes sense or not, and I think judges in child protection cases are going to quickly side with what's important for the child.
I HATE this kind of nonsense, and threatening you as a parent is only making things worse. Why not offer a way to handle this on a simple website? It would have lower cost to the school and be more accessible to anyone with any device able to access websites. Nonsense.
Well, the judge will likely rule the app is bullshit, but in the meantime CPS will argue they need to go into your house, look for a dirty dish or the wrong proportion of snacks to vegetables, or take notice that your child is playing independently outside while they're around. Then they will portray that in the most insane way possible, and since it's a civil and not a criminal process, there is no requirement that anything be shown beyond a reasonable doubt.
There's also the problem that once they have your kid, the tables are completely turned, rather than them showing why they should take them, now you have to show why you should get them back and that is a process that can be dragged out for over a year.
Unfortunately CPS has wide latitude, secret courts, and the ability to endlessly fuck with you, so it's better just not to "invite" them into your life if you can. And if they do manage to snatch your kid, note that they care so little about the kid that their contractors will leave a kid in a hot car to die, because apparently that's safer than being with their parents.[]
Damn. When I had a child in Germany, our version of CPS came over and told me what fun things the city offers for children and asked me about my plans for day care and how I can get help to get a spot.
I once called them because the day care lady of a friend's kid is a bit of an idiot and had kinda scared us about mass closures of day care centers, and it was probably the nicest interaction I've ever had with a government agency.
But from what I’ve heard, America in general is a whole other beast both regarding expectations for parents, trust in the kids and the trouble you can get in for minor things.
I wouldn't be so quick to equate differences in personal anecdotes with stark country-level differences (though it's plausible that everything is worse in America as usual)
I grew up in a low income neighborhood in the Netherlands and many times saw people be utterly terrified of CPS. In many cases these were households where outside help could've been really useful, but even in the worst cases where heavy CPS involvement was the only option (real "take the child away" cases), the child's situation often unfortunately hardly got better, just different. In less intense cases CPS involvement often just seemed to thrust a compliance burden on households without offering much real support, mostly just leaving people feeling guilty and stigmatized. Overall still better for them to exist than not, and budget cuts and restructuring really hurt the situation later, but still an organization with very real odds of making the situation worse, sometimes catastrophically worse.
I'm so sorry that's the situation in your country. The other answer to your message, from Germany, is pretty close to my experience in France: child protection is way less combative and genuinely invested in what's good for the children.
Because once you figure out the correct way to handhold, you can automate it and the tediousness goes away.
It’s only tedious once per codebase or task, then you find the less tedious recipe and you’re done.
You can even get others to do the tedious part at their layer of abstraction so that you don't have to anymore. Same as compilers, CPU design, or any other part of the stack lower than the one you're using.
But the way they learn to be wise in the context of using LLMs is to try using them and fail, just like all learning experiences. Companies insisting on the use of these tools seems logical to me when the assumption is that they will, once learned, be better than previous methods of working, but only with practice.
how do you know what you want if you didn't write a test for it?
I'm afraid what you want is often totally unclear until you start to use a program and realize that what you want is either what the program is doing, or it isn't and you change the program.
MANY programs are made this way, I would argue all of them actually. Some of the behaviour of the program wasn't imagined by the person making it, yet it is inside the code... it is discovered, as bugs, as hidden features, etc.
Why are programmers so obsessed that not knowing every part of the way a program runs means we can't use the program? I would argue you already don't, or you are writing programs that are so fundamentally trivial as to be useless anyway.
LLM written code is just a new abstraction layer, like Python, C, Assembly and Machine Code before it... the prompts are now the code. Get over it.
> how do you know what you want if you didn't write a test for it?
You have that backwards.
How do you know what to test if you don't know what you want?
I agree with you though, you don't always know what you want when you set out. You can't just factorize your larger goal into unit tests. That's my entire point.
You factorize by exploration. By play. By "fuck around and find out". You have to discover the factorization.
And that is a very different paradigm from TDD. Both will end with tests, and frankly, the non-TDD paradigm will likely end up with more tests and better coverage.
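As a toy sketch of that ordering (the `summarize` function and its behaviour are invented for illustration): you prototype by playing in a REPL, discover what you actually want the code to do, and only then pin the discovered behaviour down as tests, rather than writing the tests up front.

```python
def summarize(values):
    """Prototype written by exploration, not from an up-front spec."""
    if not values:
        return {"count": 0, "mean": None}
    return {"count": len(values), "mean": sum(values) / len(values)}

# Behaviour discovered by playing in a REPL (e.g. "what should an empty
# list do? ah, None for the mean feels right"), then frozen as tests:
def test_summarize_empty():
    assert summarize([]) == {"count": 0, "mean": None}

def test_summarize_mean():
    assert summarize([2, 4]) == {"count": 2, "mean": 3.0}
```

The tests here still exist and still guard against regressions; they just document a factorization that was discovered, not one that was specified in advance.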
> Why are programmers so obsessed that not knowing every part of the way a program runs means we can't use the program?
I think you misunderstand. I want to compare it to something else. There's a common saying "don't let perfection be the enemy of good (enough)". I think it captures what you're getting at, or is close enough.
The problem with that saying is that most people don't actually believe in perfection[0], because perfection doesn't exist. So the saying ends up being a lazy thought-terminator instead of addressing the real problem: determining what is good enough.
In fact, no one knows every part of even a trivial program. We can always introduce more depth and complexity until we reach the limits of our physics models, where no one knows. So by that reasoning, it was never about perfection.
I think you are forgetting why we program in the first place. Why we don't just use natural language. It's the same reason we use math in science. Not because math is the language of the universe but rather that math provides enough specificity to be very useful in describing the universe.
This isn't about abstraction. This is about specification.
It's the same problem with where you started. The customer can't tell my boss their exact requirements and my boss can't perfectly communicate to me. Someone somewhere needs to know a fair amount of details and that someone needs to be very trustworthy.
I'll get over it when the alignment problem is solved to a satisfactory degree. Perfection isn't needed, but we will have to discuss what is good enough and what is not.
[0] Except likely juniors, and it should be beaten out of them. Kindly.
Imagine this as a voice chat interface between two human beings. This is basically pretending that the interaction of thought and perception of what is on screen is gated by an "I have fully consciously absorbed everything on screen and decided my next action" model, where both the human ability to perceive and the computer's ability to represent information are perfect.
No. That’s not how humans interact with computers. It’s not how humans interact with each other either.
Turn based games can be fun. They are not how we want to interact for day to day life.
Sorry but your idea comes across as one that makes the job of making the computer good to interact with easier, but not as one that makes the computer better to interact with as a human.
Please stop oversimplifying a complex system. Humans are complex; the solution isn't to be less human. It is for computers to become better at human interaction, on human levels.
I’m not sure lacking comprehension of a comment and choosing to ignore that lack is better. Or worse: asking everyone to manually explain every reference they make. The LLM seems a good choice when comprehension is lacking.