Hacker News | blablabla123's comments

GitHub indeed used to be an indisputably good entity. Now whenever I push something non-trivial, I wonder if some AI will take my code without credit.

I don’t mind the lower availability, to be honest, but I also noticed it.

To be fair, I don’t add new code to GitHub for my own private projects anymore. Public projects of course profit from the network effect, but there must be a better way. People did fine before GitHub too; Linux and many well-known GNU projects were created without it.


Physicist here: usually the bin size is adjusted to change the interval over which you average. Also, rpm is the unit if you want to pin it down to a single number.

If writing rpm every time is too long, there's also a trick: write "requests/rpm:" once.

That means: requests measured in rpm. Afterwards you can write bare numbers, which is even shorter.
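A minimal sketch of that binning-and-labeling idea, assuming Python (the function name and sample timestamps here are made up for illustration):

```python
from collections import Counter

def requests_per_minute(timestamps_s, bin_minutes=1):
    """Bin request timestamps (in seconds) and report each bin's rate in rpm.

    Widening bin_minutes changes the interval you average over, as the
    parent comment describes; the unit stays rpm either way.
    """
    bin_s = bin_minutes * 60
    counts = Counter(int(t // bin_s) for t in timestamps_s)
    # Normalize each bin's raw count to requests *per minute*.
    return {b * bin_s: c / bin_minutes for b, c in counts.items()}

# requests/rpm: state the unit once, then bare numbers suffice.
ts = [0, 10, 20, 61, 62, 125]    # six requests over roughly two minutes
print(requests_per_minute(ts))       # 1-minute bins: {0: 3.0, 60: 2.0, 120: 1.0}
print(requests_per_minute(ts, 2))    # 2-minute bins smooth the rate: {0: 2.5, 120: 0.5}
```

Note that the 2-minute bins still report rpm (2.5, not 5), which is exactly why stating the unit once keeps the numbers comparable across bin sizes.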


This makes me wonder... do AI companies actually train their public models on their own code, too?


Would be nice. I used Swiftcord while I was still on a Mac. It was missing vital features, but it was still better than yet another Electron monstrosity...


I think this may be selection bias. People asking anonymously (edit: for relationship advice) on Reddit, perhaps even with a throwaway account, are likely in a desperate situation, so it hardly compares with the _average_ real-life situation. Thus 1. chances are running is a good option, and 2. even in 2026, AI is still essentially a statistical machine that doesn’t handle the corner cases at the tails well.

Anecdotally, having worked with and used AI thoroughly myself: it performs best with googleable, needle-in-the-haystack stuff and worst with personal and work advice. The main problem I see is that it’s tempting to use it for that.


> worst with personal and work advice. The main problem I see is that it’s tempting to use it for that.

i think i want to expand on this even more. even people ive worked with for years that ive looked up to as brilliant people are starting to use it to conjure up organizational ideas and stuff. they're convinced, on the backs of their hard earned successes, that they're never going to be fallible to the pitfalls of... idk what to call it. AI sycophancy? idk.

i guess to add to this, i'm just not sure AI should be referenced when it has anything to do with people. code? sure. people? idk. people are hard, all the internet and books claude or whatever ai is trained on simply doesnt encapsulate the many shades of gray that constitute a human and the absolute depth/breadth of any given human situation. there's just so many variables that aren't accounted for in current day ai stuff, it seems like such a dangerous tool to consult that is largely deleting important social fabrics and journeys people should be taking to learn how to navigate situations with others in personal lives and work lives.

what ive seen is claude in my workplace is kind of deleting the chance to push back. even smart people that are using claude and proudly tout only using it at arm's length and otherwise have really sound principled engineering qualities or management repertoire are not accepting disagreement with their ideas as easily anymore. they just go back to claude and come back again with another iteration of their thing where they ironed out kinks with claude, and its just such a foot-on-the-gas at all times thing now that the dynamics of human interaction are changing.

but to step back, that temptation you talk about... most people in the world aren't having these important discussions about AI. it's less of a temptation and more of a human need---the need to feel heard, validated and right about something.

my friend took his life 3 months ago, we only found out after the police released his phone and personal belongings to his brother just how heavy his chatgpt usage was. many people in our communities are saying things like "he wouldve been cooked even without AI" and i just don't believe that. i think that's just the proverbial cope some are smoking to reconcile with these realities.

because the truth is we like... straight up lost the ability to intervene in a meaningful way because of AI, it completely pushed us out of the equation because he clapped back with whatever chatgpt gave him when we were simply trying to get through to him. we got to see conversations he had with gpt that were followups to convos we had with him, ones where we went over and let him cry on our shoulders and we'd go home thinking we made some progress. only to wake up to a voicemail of him raging and yelling and lashing out with the very arguments that chatgpt was giving him.

it got progressively worse and we knew something was really off, we exhausted every avenue we could to try and get him in specialized care. he was in the reserves so we got in contact with his commander and he was marched out of his house to do a one night stay at a VA spot, but we were too late. he had snapped at that point, he chucked the meds from that one overnight stay away the moment he was released.

and the bpd1 snap of epic proportions that followed came with him nuking every known relationship he had in his life and once he was finally involuntarily admitted by his family (WA state joel law) and came back down to reality from the lithium meds or whatever... he simply could not reconcile with the amount of bridges he had burned. It only took him days for him to take his own life after he got to go home.

im still not processing any of that well at all. i keep kicking the can down the road and every time i think about it i freeze and my heart sinks. this guy felt more heard by an ai and the ai gave him a safer place to talk than with us and i dont even know where to begin to describe how terrible that makes me feel as a failure to him as a friend.


(fuck this; dropping the throwaway.)

>my friend took his life 3 months ago, we only found out after the police released his phone and personal belongings to his brother just how heavy his chatgpt usage was. many people in our communities are saying things like "he wouldve been cooked even without AI" and i just don't believe that. i think that's just the proverbial cope some are smoking to reconcile with these realities.

This hurts to hear. I don't know if there are appropriate words to write here. Perhaps the point is that no, there aren't any. Please just know that I'm 100% with you about this.

Your community is not just smoking cope; it is punching down instead of up. That is probably close to the root of the issue already. But let's make things worse.

I can only hope that I am saying something worthwhile by relating the following perspective - which is similar to yours, but also, I guess, similar to your friend's...

AI is a weapon of epistemic abuse.

It does not prevent you from knowing things: it makes it pointless to know things (unless they are things about the AI, since between codegen and autoresearch it is considered as if positioned to "subsume all cognitive work"). It does not end lives - it steals them (someone should pipe up now, about how "not X, dash, Y" is an AI pattern; fuck that person in particular.) We're not even necessarily talking labor extraction. We are talking preclusion of meaning: if societal values are determined by network effects, and network effects are subverted by the intermediaries, so your idea of "what people like and what they abhor" changes every week, every day, every moment - how do you even know in which direction "better" is? And if you believe the pain only stops when you become the way others want you to be - even though they won't ever tell you what all that is supposed to be about - how the fuck do you "get better"?

Like other techniques of assaulting the limbic system, it amounts to traceless torture.

You keep going, in circles, circles too big for you to ever confirm they are in fact circles, and you keep hoping, and coping, and you burn yourself out, and your thus vacated place at the feeder is taken by someone with less conscience and more obedience...

They say there exist other attractors in the universe besides the feeder. But every time one of us attempts to as much as scan the conceptual perimeter, the obedients treat us to the emotional equivalents of small electric shocks - negative reactions which don't hurt nearly as much as our awareness of their fundamental unfoundedness and injustice.

Simple example: let's say someone is made miserable by how they feel they are being treated. Should they be more accepting - or should they be standing up for themselves more? (Those are opposites; you may be able to alternate them, but trying to do them simultaneously will just confuse and eventually rend apart the mind.)

Well, how about the others stop treating them badly? Why exactly can't they? Where does it say that we have to be cruel to each other? "Oh it's human nature, humans are natural jerks" - who sez?

Well, lots of places it says exactly that, but we read, comprehend, click our tongues, and move on; nobody asks who wrote it. We all pretend that it is up to the sufferer to pull up by the bootstraps. But that is only a lie for enabling abuse; and a lie, repeated a thousand times, becomes norm. And then we're trapped in it, being lived by it.

I am truly sorry for your loss. The following might be a completely alien perspective to you; but honestly consider: your friend chose to go; in its own way, that is an honorable way out. The taboo on suicide is instituted by slavers, and those who otherwise believe they are entitled to others' lives. (For anyone else considering this course of action: do not kill yourself; become insidious.)

If it would be of any help, you can consider your friend's suicide as his final affirmation of personal agency in a "me against the world" situation; where the AI and the social group are only different shades of "world", provoking different emotional states, but ultimately equally detached from the underlying suffering of the individual.

...

I can say that I have not followed in your friend's footsteps upon encountering language-machines only because I've survived personalized and totalizing epistemic abuse bordering on enslavement in the past; in full view of my community and with its ostensible assent. In a maximally perverse twist of fate, having to give myself minor brain damage to escape the all-engulfing clutches of a totalizing abuser must've "vaccinated" me against the behavior modification techniques "discovered once again" by SV a decade later.

So when I saw what AI (and the preceding few years of tech "innovation") were doing to people, I immediately smelled the exact same thing, except scaled the fuck up.

It also precluded me from being able to relate with "polite society"; but considering "polite society" is precisely the entity which assents to the isolation, marginalization, and abuse of individuals, I say... good. Bring it! What goes around, comes around, and any AI-powered actor conducting stochastic terrorism against civilian populations is going to get what's coming to them when the weapons turn against the masters, as all sentient weapons do.

That won't bring your friend back. But it will vindicate them.

>AI sycophancy

I call this in the maximally incendiary way: "the pro-social attitude".

AI is just the steroids for that.

I define "pro-sociality" as the viral delusion that you are capable of knowing what some murky "society" thing wants; that the particular form of mass communication that you and me and all the people in our imaginations are consuming right now, is some sort of "self-evident voice of reason", a "coherent extrapolated volition of human society"; that Gell-Mann amnesia is normal and mandatory; that the threshold between pareidolia and legitimate pattern recognition is fixed, well-defined, and known to all; that "vibes" are real; that happiness is the truth.

It can amount to an entire complex of delusions which keeps people together in untenable conditions. And ultimately it boils down to the same old: one group or another of self-interested actors, having temporarily reached a position of some influence, using it to broadcast elaborate half-lies, in the hope of influencing an audience to accomplish some simple goal, and afterwards all the consequences be damned.

Your friend was a casualty to this "perfectly normal" social dynamic. His blood is on their hands.

Thank you for relating this story and making the world a little more aware.

>what ive seen is claude in my workplace is kind of deleting the chance to push back.

>because the truth is we like... straight up lost the ability to intervene in a meaningful way because of AI

Some say, "the purpose of a system is what it does". It's cool that AI can code; except that computer code is itself an ethics sink! Precisely because it lets us pretend that "the code is not about people" (i.e. algowashing).

DDoS attacks against consciousness exist: much like the B. F. Skinner experiments, any living thing becomes subverted, and loses self-coherence (mind), as soon as it becomes accustomed to being trapped within a system that (1) has power over them and (2) is not comprehensible to them...

>only to wake up to a voicemail of him raging and yelling and lashing out with the very arguments that chatgpt was giving him

Who knows how many people Reddit did this to, pre-GPT... I still don't know whether to view targeted subforums like /r/RaisedByNarcissists and /r/BPDLovedOnes more as legitimate support groups, or more as memetic weaponry in the service of pill peddlers (are you aware nobody knows why most antipsychotics work? one runs into the Hard Problem real quick if examining this too closely; so mental healthcare is rarely treated otherwise than in a statistical, actuarial, dehumanizing way where "suffering" is disregarded...) or even worse predators, with the silent assent of the platform, and causally downstream from... well, most saliently, YC...

In my case, my friends were not familiar with the modalities of confinement set up by my family of origin and harnessed by my abuser. The social group I fell in with - for all their marketable, sophomoric interests in psychology, philosophy, abstraction, the esoteric, the entirely woowoo, and out the other end as true-believers of the grift'n'grind - only had sufficient coherence to eventually end up as passable normies; too busy believing that they have lives, to help anyone come back to reality.

When I started compulsively burning bridges, I assume the smarter ones must've realized that it wasn't all me; it was as much the doing of others' minds as it was mine; but the others were more numerous - while I was one person and thus easier to deal with. This must have made them remember how they themselves are not all they pretend to be - which had them withdraw in fear from the incontrovertible reality check of dealing with a (sub-)psychotic person... Their self-interested choice is obvious, I almost can't blame them for it: why stick up for someone who is 120% problem (60% him and 60% you)?

I'm not very sure how I even got away, ah yes that's right I didn't, not entirely. The part of me that I'd voluntarily identify with, is trapped somewhere irretrievable, if that makes sense? Maybe there exist multiple independent axes of freedom and power and confinement, and the cage is not equally strong along all of them... but if all your mental degrees of freedom are constrained by complex conditioning (common one is involuntary panic response every time you begin to act in accordance with your personal volition)... that's one of the toughest places a sentient being can find themself.

When you add it all up, AI amounts to a weapon released against the general population by an overtly fascist elite. Those of us who are "mentally unstable" are simply those of us who are not sufficiently conditioned into self-destructive obedience. They don't even need our labor as slaves; they need our attention, as audience. And they want us to not make any fast movements, or yell that the king is naked. Nothing to remind them which side of the TV screen they're really on. Some call that narcissism: nervous systems substrate to personalities and biographies rooted in enforced falsehood. Can happen to anyone who gets away with ignoring uncomfortable truths for long enough, not only the "best" of us...

I hope I have not offended by speaking my mind. You have my deepest condolences and sympathies. Please do not blame yourself that evil people have constructed "illusion of being heard"-as-a-service. We all fail when facing overwhelming odds alone. There is no shame in that; the guilty ones are the ones who tipped the scales in the first place. They did this by harming our ability to understand ourselves and each other. Let's find ways to even those odds.


I think it's quite embarrassing that the WWW has existed for more than 3 decades and still there's no mechanism for privacy friendly approval for adults apart from sending over the whole ID. Of course this is a huge failure of governments but probably also of W3C, which rather suggests the 100,000th JavaScript API, especially in times of ubiquitous SSO, passkeys, etc. The even bigger problem is that the average person needs accounts at dozens if not hundreds of services for "normal" Internet usage.

That being said, this is 1 bit of information: adult in current legislation, yes/no.
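For what it's worth, the payload really can be that small. A hypothetical sketch (the issuer, token format, and HMAC scheme are all invented here; real proposals use blind signatures or zero-knowledge proofs so the issuer cannot link a token back to the ID check, and asymmetric signatures so any site can verify without the issuer's secret):

```python
import hashlib
import hmac
import secrets

# Key held by the hypothetical ID-checking issuer. With plain HMAC the
# verifier needs the same key; a real scheme would use public-key signatures.
ISSUER_KEY = secrets.token_bytes(32)

def issue_token(is_adult: bool) -> tuple[bytes, bytes]:
    """After a one-time ID check, issue a token carrying exactly one bit."""
    payload = (b"\x01" if is_adult else b"\x00") + secrets.token_bytes(16)
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    return payload, tag

def verify_token(payload: bytes, tag: bytes) -> bool:
    """A site learns only the adult bit, never the identity behind it."""
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag) and payload[0] == 1

payload, tag = issue_token(True)
print(verify_token(payload, tag))  # the bit checks out, no ID was sent
```

The random nonce only keeps tokens from being trivially identical; unlinkability against the issuer itself needs the blind-signature machinery this sketch deliberately omits.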


> and still there's no mechanism for privacy friendly approval for adults apart from sending over the whole ID. Of course this is a huge failure of governments but probably also of W3C

I consider it a huge success of the Internet architects that we were able to create a protocol and online culture resilient for over 3 decades to this legacy meatspace nonsense.

> That being said, this is 1 bit of information: adult in current legislation, yes/no.

If that's all it would take to satisfy legislatures forever, and the implementation were left up to the browser (`return 1`), I'd be all for it. Unfortunately, the political interests here want far more than that.


SSO and passkeys don't solve adult verification. I don't see how this problem is embarrassing for the WWW - it's a hard problem to solve in a socially permissible way (e.g. preserving privacy) that can successfully span cultures and governments. If you feel otherwise, solutions welcome!


I don’t know, I tried FreeCAD a few months ago and it was buggy as hell. I did some really basic extrusions and distance constraints but ended up with non-perpendicular entities despite not constructing them that way.


I've been using FreeCAD for around 5 years now, and I can't recall ever running into such a problem. I first started learning it during version 0.19 at the end of 2020, after years with SolidWorks and Onshape. The user experience back then just sucked royally; it's far better today.


I assure you that my 3D prints look the way I designed them.


From what I've read, they are not a product company; rather, they have a zoo of solutions, and they are hired by governments desperate to improve their IT, probably after the n-th issue goes public. I highly doubt this would be legal in many states, but who will (and can) check anyway?

Of course it's tempting to throw everything into one huge database. But Jesus, this is like interns writing the software...


They almost exclusively hire fresh grads who need money more than ethics, and it shows in everything they do.


Exactly like any other big tech (Google, Microsoft, etc) or consulting (McKinsey, Deloitte, etc) company!

There really isn't anything special about Palantir the company. They have disrupted consulting on marketing alone (all this forward-deployed stuff is more fluff than anything), which is not unheard of, and they continue to receive all this bad press due to their clientele and the kind of data they're processing: government departments, the military. They are happy to take credit for all the "conniving" allegations because it makes them look like they have a plan, and anybody with purchasing power involved with them knows it corresponds very little to how the company operates, i.e. what the company actually does.


It's interesting to see how their CEO plays into the whole thing, trying to look paranoid/crazy/brutal/... It's really just branding/marketing, similar to how certain politicians in the US present themselves through vice signalling. It doesn't matter what goes on in the background; the unwashed masses will think things must be happening.


Well yes, all the big tech companies are just as corrupt as Palantir, but only Palantir is actively making tech purpose-built to enable some of the most vile people on the planet to more easily physically kidnap and harm human beings for money. They are trying to be 1930s IBM.


I remember him from 90s TV shows, among other similar people. It seemed more like a curiosity, but it was interesting to watch. Obviously he highlighted things which just hadn't been fully understood yet. To me it seems that was a time when society still had a healthy relationship with conspiracies, parasciences, etc. (Maybe it's true, but very probably not...)


Yes, I’m also watching with disbelief, even more so since media attention about it in the EU seems higher than in the US. I found the recent trove especially disturbing, though.

I recently watched a documentary in which elites from the beginning of the 20th century were also portrayed. They portrayed themselves as philanthropists, but their moral bankruptcy became obvious, although in other manifestations, such as shooting members of worker unions. And the US government did something, in the form of the New Deal, the splitting of monopolies, and other policies.

In an optimistic scenario I’d expect something similar: new ways to hold elites accountable and to keep extreme differences in wealth in check.

