
Reddit is notorious for being awful at real life interactions

just look at the relationship subreddits: the first answer is always divorce, it’s become a meme

but beyond romantic relationships, i think a lot of us have seen how it can impact work relationships. i’ve had venture partners clearly rely on AI (robotic email responses and even SMS), and that warped their perception and made it harder to connect. It signals laziness and a lack of emotional intelligence

AI should enhance and enable connection, not promote isolation, imo this is a real problem

it should spark curiosity, create openings for conversations, point out our biases to make us better at connecting with other people. i hope we get to a point where most people are made kinder by AI. I’m seeing the opposite atm; interested in hearing others’ experiences with this



One of the reasons relationship advice subreddits suggest divorce so often is because most people with "small" problems in their relationships don't write an essay about it on Reddit but are able to solve them with the tools/friends they have. So a Reddit post existing indicates the relationship has serious flaws.

This is not to defend the study, because asking AI has a lower barrier to entry.


No, a Reddit post indicates whoever posted is fishing for large scale validation from internet strangers. Their relationship may or may not even exist. Most of the posts are pretty obviously fake. Just like 90% of interactions in general on Reddit these days. That site should be taken out back and put out of its misery.


But there's nothing wrong with that. Some decisions are made on a hunch. Even a therapist won't tell you whether to divorce or not. Victims are often not in a state to decide. They need support (as I do myself).

Humans often don't help. They often just say things like "everyone goes through pain, it is part of life," blah blah.


>That site should be taken out back and put out of its misery.

With it gone, a large portion of its users would come here, reducing the signal-to-noise ratio of HN.


> just look at the relationship subreddits: the first answer is always divorce, it’s become a meme

As someone who has been married for a couple of decades, I, too, would recommend divorce to many of the (often-fictional) people asking Reddit for relationship advice. A marriage has a huge impact on whether your life is basically good, or if you pass a big chunk of your time on this Earth in misery. And many of the people (or repost bots) asking for advice on Reddit appear to be in shockingly awful relationships. Especially for people who don't have kids, if your marriage is making you miserable, leave.

(But aside from this, yeah, don't ask Reddit for relationship advice. Reddit posters are far more likely to be people who spend their life indoors posting on Reddit, and their default advice leans heavily towards "never interact with anyone, ever.")


I always find it interesting how, on Reddit, for any trivial fight or even just a difference of opinion, the advice is always to end the relationship.


I think it is some kind of survivorship bias. Who is going to give advice on Reddit? Maybe people shying away from difficult social interactions?


Paired with the echo chamber effect voting systems create. Anything that affirms the biases of a majority of upvoters gets elevated, anything that contradicts them gets hidden, and so you not infrequently end up with ubiquitous nonsense that then further reinforces the echo chamber as they become self-assured. Then real life intervenes, completely goes against the online zeitgeist, and they're all confused.


If you want to get poor fast, you can follow the most upvoted advice on r/wallstreetbets.


Would be interesting to have data on that. If it was true, you could win by always doing the opposite!


Not necessarily. WSB users are trying to make it big, which means betting on long shots. This could be penny stocks, companies on the verge of bankruptcy, or ones with more sentimental value than fundamentals.

Betting against these companies is obvious and expected, so the cost of shorting might be high enough that even if you’re correct (the stock goes down, the opposite of what WSB said), the fee to borrow the stock from someone else means you still lose money.

Also:

1. shorting stocks can be quite dangerous. Your downside is, well, not infinite but it can easily wipe you out.

2. You might be correct that the stock goes down, but over what time frame? Again, you have to pay money to hold a short. Or you’re using a different financial instrument that has a specific timeline. If the market does move in your direction but too late, you still lose.


Just because one action is demonstrably harmful does not mean its negation is automatically beneficial.

Or, formally, my claim is A implies B. The only valid contrapositive is not-B implies not-A (not losing money means not following advice on r/wallstreetbets).

But you say not-A implies not-B, which is the fallacy of denying the antecedent.
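The fallacy above can be checked mechanically. As an illustration (not from the thread), a brute-force truth table in Python finds the one assignment where "A implies B" holds but "not A implies not B" fails:

```python
# Truth-table sketch: "A implies B" does not entail
# "not A implies not B" (denying the antecedent).
# Here A = "followed WSB advice", B = "lost money".

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Enumerate assignments where A -> B holds but !A -> !B fails.
counterexamples = [
    (a, b)
    for a in (True, False)
    for b in (True, False)
    if implies(a, b) and not implies(not a, not b)
]
print(counterexamples)  # [(False, True)]: didn't follow the advice, lost money anyway
```

So "doing the opposite" isn't guaranteed by the original claim: you can ignore WSB entirely and still lose money.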


It doesn't have to be universally true to be true in a mathematical system like options/puts/calls.


The truth was so obvious I didn't bother to find data before doing the opposite: most of their posts are "I'm going/went all-in, high-leverage, on this moonshot! And... it's gone." I've successfully applied the opposite approach, investing safe amounts in a broad portfolio, and it's going pretty well (or was before this whole Iran thing).


Those particular subreddits are heavily populated by incels voraciously consuming the stories of relationship strife (real, distorted and purely fictional) to validate their belief that their relationship status is down to the evils of the opposite sex specifically and all relationships being doomed in general.

That's as big a bias as AI affirmation bias; indeed AI and certain corners of Reddit are probably the only two venues likely to provide this sort of affirmative response https://alexyeozhenkai.substack.com/p/i-cheated-on-my-wife-b...


The power of an echo chamber: it makes extremism seem logical.


I think it's more than just an echo chamber, it's the community wanting drama more than it wants to help people.


> the advice it's always to end the relationship.

To be fair, if your interpersonal skills and relationship dynamic are such that you find yourself seriously asking the Internet (Reddit of all places) for relationship advice... yeah, just end it is probably the null hypothesis.


I mean... it's a solution guaranteed to work in a trivial sense. It's not meant to be a serious suggestion but more of a thought experiment, like "hold this as the bar, can you find a solution better than this?"

It's like what GiveDirectly says: all charitable interventions should be benchmarked against simply giving the beneficiaries a wad of cash.


That would be because nearly all of those posts are entirely made up and somehow I guess people can’t tell that?


Stands to reason. Ask a computer for advice and it is going to give you a computer-centric answer: Restart and try again.


You deserve better


A very sycophantic AI style answer.

Code bot equivalent being all "you are absolutely right! Here is the unequivocal fix for now and all time!"


No one would ever have made that comment before GPT. I have a feeling you could be the one that’s poisoned!


Yes, in principle this would be a great way to get a grip on AI personal-decision-making. But there’s a nontrivial chance Claude is more emotionally intelligent than r/AITA. That is not something I enjoy saying.


Eh, AITA works very well for the more common and obvious situations.

I wonder how MUCH better Claude really is when compared to AITA. Also people are mixing up relationship advice with AITA.


Every answer on Reddit feels like: divorce them, dump them, cut them out of your life, or similar.


I think this may be selection bias. People asking anonymously (edit: for relationship advice) on Reddit, perhaps even with a throwaway account, are likely in a desperate situation. So it's hardly comparable to the _average_ real-life situation. Thus 1. chances are that running is a good option, and 2. even in 2026, AI is still essentially a statistical machine that doesn't handle corner cases at the tails well.

Anecdotally, as someone who has worked with and used AI thoroughly myself: it performs best with googleable, needle-in-the-haystack stuff and worst with personal and work advice. The main problem I see is that it's tempting to use it for that.


> worst with personal and work advice. The main problem I see is that it’s tempting to use it for that.

i think i want to expand on this even more. even people i've worked with for years, people i've looked up to as brilliant, are starting to use it to conjure up organizational ideas and stuff. they're convinced, on the backs of their hard-earned successes, that they're never going to be fallible to the pitfalls of... idk what to call it. AI sycophancy? idk.

i guess to add to this, i'm just not sure AI should be consulted when it has anything to do with people. code? sure. people? idk. people are hard. all the internet and books claude or whatever AI is trained on simply don't encapsulate the many shades of gray that constitute a human, or the absolute depth/breadth of any given human situation. there are just so many variables that aren't accounted for in current-day AI stuff. it seems like such a dangerous tool to consult, one that is largely deleting the important social fabrics and journeys people should be taking to learn how to navigate situations with others in their personal and work lives.

what i've seen is claude in my workplace is kind of deleting the chance to push back. even smart people who use claude, who proudly tout only using it at arm's length and otherwise have really sound, principled engineering qualities or management repertoire, are not accepting disagreement with their ideas as easily anymore. they just go back to claude and come back again with another iteration of their thing where they ironed out the kinks with claude, and it's just such a foot-on-the-gas-at-all-times thing now that the dynamics of human interaction are changing.

but to step back, that temptation you talk about... most people in the world aren't having these important discussions about AI. it's less of a temptation and more of a human need: the need to feel heard, validated, and right about something.

my friend took his life 3 months ago. we only found out after the police released his phone and personal belongings to his brother just how heavy his chatgpt usage was. many people in our communities are saying things like "he would've been cooked even without AI" and i just don't believe that. i think that's just the proverbial cope some are smoking to reconcile with these realities. because the truth is we, like... straight up lost the ability to intervene in a meaningful way because of AI. it completely pushed us out of the equation because he clapped back with whatever chatgpt gave him when we were simply trying to get through to him. we got to see conversations he had with gpt that were followups to convos we had with him, ones where we went over and let him cry on our shoulders and we'd go home thinking we made some progress, only to wake up to a voicemail of him raging and yelling and lashing out with the very arguments chatgpt was giving him.

it got progressively worse and we knew something was really off, so we exhausted every avenue we could to try and get him into specialized care. he was in the reserves, so we got in contact with his commander and he was marched out of his house to do a one-night stay at a VA spot, but we were too late. he had snapped at that point; he chucked the meds from that one overnight stay the moment he was released. and the bpd1 snap of epic proportions that followed came with him nuking every known relationship he had in his life. once he was finally involuntarily admitted by his family (WA state joel law) and came back down to reality from the lithium meds or whatever... he simply could not reconcile with the amount of bridges he had burned. it only took days for him to take his own life after he got to go home.

i'm still not processing any of that well at all. i keep kicking the can down the road, and every time i think about it i freeze and my heart sinks. this guy felt more heard by an AI, and the AI gave him a safer place to talk than with us, and i don't even know where to begin to describe how terrible that makes me feel, as a failure to him as a friend.


(fuck this; dropping the throwaway.)

>my friend took his life 3 months ago. we only found out after the police released his phone and personal belongings to his brother just how heavy his chatgpt usage was. many people in our communities are saying things like "he would've been cooked even without AI" and i just don't believe that. i think that's just the proverbial cope some are smoking to reconcile with these realities.

This hurts to hear. I don't know if there are appropriate words to write here. Perhaps the point is that no, there aren't any. Please just know that I'm 100% with you about this.

Your community is not just smoking cope; it is punching down instead of up. That is probably close to the root of the issue already. But let's make things worse.

I can only hope that I am saying something worthwhile by relating the following perspective - which is similar to yours, but also, I guess, similar to your friend's...

AI is a weapon of epistemic abuse.

It does not prevent you from knowing things: it makes it pointless to know things (unless they are things about the AI, since between codegen and autoresearch it is considered as if positioned to "subsume all cognitive work"). It does not end lives - it steals them (someone should pipe up now, about how "not X, dash, Y" is an AI pattern; fuck that person in particular.) We're not even necessarily talking labor extraction. We are talking preclusion of meaning: if societal values are determined by network effects, and network effects are subverted by the intermediaries, so your idea of "what people like and what they abhor" changes every week, every day, every moment - how do you even know in which direction "better" is? And if you believe the pain only stops when you become the way others want you to be - even though they won't ever tell you what all that is supposed to be about - how the fuck do you "get better"?

Like other techniques of assaulting the limbic system, it amounts to traceless torture.

You keep going, in circles, circles too big for you to ever confirm they are in fact circles, and you keep hoping, and coping, and you burn yourself out, and your thus vacated place at the feeder is taken by someone with less conscience and more obedience...

They say there exist other attractors in the universe besides the feeder. But every time one of us attempts to as much as scan the conceptual perimeter, the obedients treat us to the emotional equivalents of small electric shocks - negative reactions which don't hurt nearly as much as our awareness of their fundamental unfoundedness and injustice.

Simple example: let's say someone is made miserable by how they feel they are being treated. Should they be more accepting - or should they be standing up for themselves more? (Those are opposites; you may be able to alternate between them; but trying to do them simultaneously will just confuse and eventually rend apart the mind.)

Well, how about the others stop treating them badly? Why exactly can't they? Where does it say that we have to be cruel to each other? "Oh it's human nature, humans are natural jerks" - who sez?

Well, lots of places say exactly that, but we read, comprehend, cluck our tongues, and move on; nobody asks who wrote it. We all pretend that it is up to the sufferer to pull themselves up by the bootstraps. But that is only a lie for enabling abuse; and a lie, repeated a thousand times, becomes the norm. And then we're trapped in it, being lived by it.

I am truly sorry for your loss. The following might be a completely alien perspective to you; but honestly consider: your friend chose to go; in its own way, that is an honorable way out. The taboo on suicide is instituted by slavers, and those who otherwise believe they are entitled to others' lives. (For anyone else considering this course of action: do not kill yourself; become insidious.)

If it would be of any help, you can consider your friend's suicide as his final affirmation of personal agency in a "me against the world" situation; where the AI and the social group are only different shades of "world", provoking different emotional states, but ultimately equally detached from the underlying suffering of the individual.

...

I can say that I have not followed in your friend's footsteps upon encountering language-machines only because I've survived personalized and totalizing epistemic abuse bordering on enslavement in the past; in full view of my community and with its ostensible assent. In a maximally perverse twist of fate, having to give myself minor brain damage to escape the all-engulfing clutches of a totalizing abuser must've "vaccinated" me against the behavior modification techniques "discovered once again" by SV a decade later.

So when I saw what AI (and the preceding few years of tech "innovation") were doing to people, I immediately smelled the exact same thing, except scaled the fuck up.

It also precluded me from being able to relate with "polite society"; but considering "polite society" is precisely the entity which assents to the isolation, marginalization, and abuse of individuals, I say... good. Bring it! What goes around, comes around, and any AI-powered actor conducting stochastic terrorism against civilian populations is going to get what's coming to them when the weapons turn against the masters, as all sentient weapons do.

That won't bring your friend back. But it will vindicate them.

>AI sycophancy

I call this in the maximally incendiary way: "the pro-social attitude".

AI is just the steroids for that.

I define "pro-sociality" as the viral delusion that you are capable of knowing what some murky "society" thing wants; that the particular form of mass communication that you and me and all the people in our imaginations are consuming right now, is some sort of "self-evident voice of reason", a "coherent extrapolated volition of human society"; that Gell-Mann amnesia is normal and mandatory; that the threshold between pareidolia and legitimate pattern recognition is fixed, well-defined, and known to all; that "vibes" are real; that happiness is the truth.

It can amount to an entire complex of delusions which keeps people together in untenable conditions. And ultimately it boils down to the same old: one group or another of self-interested actors, having temporarily reached a position of some influence, using it to broadcast elaborate half-lies, in the hope of influencing an audience to accomplish some simple goal, and afterwards all the consequences be damned.

Your friend was a casualty to this "perfectly normal" social dynamic. His blood is on their hands.

Thank you for relating this story and making the world a little more aware.

>what ive seen is claude in my workplace is kind of deleting the chance to push back.

>because the truth is we like... straight up lost the ability to intervene in a meaningful way because of AI

Some say, "the purpose of a system is what it does". It's cool that AI can code; except that computer code is itself an ethics sink! Precisely because it lets us pretend that "the code is not about people" (i.e. algowashing).

DDoS attacks against consciousness exist: much like the B. F. Skinner experiments, any living thing becomes subverted, and loses self-coherence (mind), as soon as it becomes accustomed to being trapped within a system that (1) has power over them and (2) is not comprehensible to them...

>only to wake up to a voicemail of him raging and yelling and lashing out with the very arguments that chatgpt was giving him

Who knows how many people Reddit did this to, pre-GPT... I still don't know whether to view targeted subforums like /r/RaisedByNarcissists and /r/BPDLovedOnes more as legitimate support groups, or more as memetic weaponry in the service of pill peddlers (are you aware nobody knows why most antipsychotics work? one runs into the Hard Problem real quick if examining this too closely; so mental healthcare is rarely treated otherwise than in a statistical, actuarial, dehumanizing way where "suffering" is disregarded...) or even worse predators, with the silent assent of the platform, and causally downstream from... well, most saliently, YC...

In my case, my friends were not familiar with the modalities of confinement set up by my family of origin and harnessed by my abuser. The social group I fell in with - for all their marketable, sophomoric interests in psychology, philosophy, abstraction, the esoteric, the entirely woowoo, and out the other end as true-believers of the grift'n'grind - only had sufficient coherence to eventually end up as passable normies; too busy believing that they have lives, to help anyone come back to reality.

When I started compulsively burning bridges, I assume the smarter ones must've realized that it wasn't all me; it was as much the doing of others' minds as it was mine; but the others were more numerous - while I was one person and thus easier to deal with. This must have made them remember how they themselves are not all they pretend to be - which had them withdraw in fear from the incontrovertible reality check of dealing with a (sub-)psychotic person... Their self-interested choice is obvious, I almost can't blame them for it: why stick up for someone who is 120% problem (60% him and 60% you)?

I'm not very sure how I even got away, ah yes that's right I didn't, not entirely. The part of me that I'd voluntarily identify with, is trapped somewhere irretrievable, if that makes sense? Maybe there exist multiple independent axes of freedom and power and confinement, and the cage is not equally strong along all of them... but if all your mental degrees of freedom are constrained by complex conditioning (common one is involuntary panic response every time you begin to act in accordance with your personal volition)... that's one of the toughest places a sentient being can find themself.

When you add it all up, AI amounts to a weapon released against the general population by an overtly fascist elite. Those of us who are "mentally unstable" are simply those of us who are not sufficiently conditioned into self-destructive obedience. They don't even need our labor as slaves; they need our attention, as audience. And they want us to not make any fast movements, or yell that the king is naked. Nothing to remind them which side of the TV screen they're really on. Some call that narcissism: nervous systems substrate to personalities and biographies rooted in enforced falsehood. Can happen to anyone who gets away with ignoring uncomfortable truths for long enough, not only the "best" of us...

I hope I have not offended by speaking my mind. You have my deepest condolences and sympathies. Please do not blame yourself that evil people have constructed "illusion of being heard"-as-a-service. We all fail when facing overwhelming odds alone. There is no shame in that; the guilty ones are the ones who tipped the scales in the first place. They did this by harming our ability to understand ourselves and each other. Let's find ways to even those odds.


I don't have any proof, but empirically and intuitively, Reddit seems to select for people who hate and can't stand other people.

Reddit doesn't seem to reflect the behavior of most people, but a subset.


My subjective impression is that 5 years ago AITA was actually quite wholesome and the top comments tended to be insightful. The shift towards "set boundaries, always choose yourself, you don't owe anybody anything" seems fairly recent.


Haven't been there, but I think those typing "divorce" are weighing the situation against the worst outcome to create cognitive dissonance, implying "obviously this isn't something worth breaking up over, you need to work harder" in a tongue-in-cheek, low-energy way (since there is no way to know whether the situation is even real enough to care about).

So rather than taking it literally, which would be naive and assume the worst of people, maybe you should read between the lines.

In fact, maybe those who take things literally all the time shouldn't really go there.


It's also overrun with AI content that I hope the highly trained researchers would be able to detect and filter out.

Or maybe not.


> i’ve had venture partners clearly rely on AI (robotic email responses and even SMS) and that warped their perception and made it harder to connect. It signals laziness and a lack of emotional intelligence

This is different. You are also able to detect it. You can question it. You can have a non-emotional reaction to it.

In my circle, no real person (including lifelong friends/siblings) has ever suggested divorce, even in cases of physical abuse. Reason: they don't want to get in the middle, partly for economic reasons like having to give the victim money/space, etc.

An anonymous third party can assess it without that.



