As an employee, I'm using Antigravity (CLI version) every day (because we can't use Claude) and it rules. I am way more productive than I was with CIDER-V, which itself was very nice.
When I went through YC in 2007, a founder whose name you know drunkenly told me at a party that Google Docs and MacBooks would have Microsoft out of business by 2012. Someone here told me in 2018 I was nuts to buy a gas-powered car, because everyone would switch to electric before the car wore out and there would be no gas stations left.
The impending deaths of most things are greatly exaggerated.
Search is doomed for people who create content that depends on organic search traffic, because Google's AI now serves that content directly to the person doing the search.
My decade-old tech blog with 500+ posts now gets 10x less traffic than it did a few years ago, and I'm on the fence about pulling the plug on my 10-year-old business: traffic is so low that hosting the video courses I sell now costs me more per month than they earn. That comes with other implications, like maybe stopping my YouTube channel and no longer contributing to open source, because paying the bills has priority over hobbies. I enjoy spending time on these things and was always morally OK with giving away almost everything I do and learn for free, but income requirements are very quick to slap you back into reality.
Both can be true. You can be doing really well and still face long-term risk. Dethroning incumbents takes longer than people think, and it's possible search growth goes 20%, 10%, -10%, -50%.
This loss of easter eggs in software, along with the rise of enshittification, both have the same source:
Software used to be made by Programmers with taste and opinions, according to their talent and personality, solo or in small groups. Now it is run by Project Managers and Data Scientists chasing KPIs through engagement measurement tools and A/B tests.
Fun easter eggs cannot be justified. They are cut.
Personality doesn't move the metric as much as the lowest-common-denominator, most basic thing does. That's what ships.
All software and web content has gone this way in the last 13 years or so.
I can (barely, but sustainably) run Q3.5 397B on my Mac Studio with 256GB unified. It cost $10,000 but that's well within reach for most people who are here, I expect.
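For scale, here's a back-of-envelope memory estimate (assuming roughly 4-bit quantization with some overhead; the bits-per-weight figure is an assumption, not a benchmark of any specific quant):

```python
# Rough memory estimate for hosting a ~397B-parameter model in unified memory.
# Assumes ~4.5 bits per weight on average (typical for 4-bit quants once you
# include scales/overheads); illustrative only.
params = 397e9
bits_per_weight = 4.5
weight_gb = params * bits_per_weight / 8 / 1e9

print(f"weights: ~{weight_gb:.0f} GB")          # ~223 GB
print(f"headroom on 256 GB: ~{256 - weight_gb:.0f} GB")  # ~33 GB
```

That leaves only a few tens of GB for KV cache, the OS, and everything else, which lines up with "barely, but sustainably."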
It would be plenty in-budget if the software part of local AI was a bit more full-featured than it is at present. I want stuff like SSD offload for cold expert weights and/or for saved/cached KV-context, dynamic context sizing, NPU use for prefill, distributed inference over the network, etc. etc. to all be things that just work for most users, without them having to set anything up in an overly error-prone way. The system should not just explode when someone tries to run something slightly larger; it should undergo graceful degradation and let them figure out where the reasonable limits are.
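The "graceful degradation" part doesn't need to be fancy; it could be as simple as stepping the context window down until the model fits instead of crashing. A minimal sketch, where `load_model` and its `context_size` parameter are hypothetical stand-ins for whatever loader the runtime exposes, not a real API:

```python
# Sketch: instead of exploding on out-of-memory, retry with progressively
# smaller context windows. `load_model` is a hypothetical loader callable.
def load_with_fallback(load_model, context_sizes=(131072, 65536, 32768, 16384)):
    for ctx in context_sizes:
        try:
            return load_model(context_size=ctx), ctx
        except MemoryError:
            continue  # too big for this machine; try a smaller context
    raise RuntimeError("model does not fit even at the smallest context size")
```

The point is that the user ends up with a working (if degraded) setup and a clear signal of where the limit is, rather than a crash.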
But it's well within the budget of a small company that wants to run a model locally. There are plenty of reasons to run one locally even if it's not state of the art, such as for privacy, being able to do unlimited local experiments, or refining it to solve niche problems.
There are so many good local uses for these models that I fully expect a standard workstation 10 years from now to start at 128GB of RAM and ship with at least one dedicated inference device.
Or, if you believe a lot of the HN crowd, we're in an AI bubble; in 10 years, when all of this crashes, inference will be dirt cheap, all this hardware will be sitting in data centers, and it won't make any sense to run monster workstations at home. (I work on a 128GB M4 but don't run inference; just too many Electron apps running at the same time...) :)
> I work on a 128GB M4 but don't run inference; just too many Electron apps running at the same time.
This is somewhat depressing: needing a couple thousand bucks' worth of RAM just to run your chat app, code/text editor, API doco tool, forum app, and notetaking app all at the same time...
Crucial (Micron) sold 128GB of DDR5-5600 in SODIMM form for $280 a year ago. It would be slower than the same amount on an M4 Mac, but still, I object to characterizing either as “a couple thousand bucks' worth”.
Inference will be dirt cheap for things like coding, but you'll want much more compute for architectural planning, personal assistants with persistent real-time "thinking / memory", and real-time multimedia. I could put 10 M4s to work right now and it wouldn't be enough for what I've been cooking.
Just have to reclassify it as non-frivolous, then. $10k's not a lot for something as important as a car, if you live somewhere one is required. Housing is typically gonna cost you more than $10k to own. I probably spend close to $10k on food over 1.5 years.
So if you just huff enough of the AI Kool aid, you too can own a Mac Studio. Or an M5 MacBook. Or a dual 3090 rig.
For some reason you were being downvoted but I enjoy hearing how people are running open weights models at home (NOT in the cloud), and what kind of hardware they need, even if it's out of my price range.
Yeah, none of the government is "protected", considering who's in charge of every aspect of it. Social engineering is the biggest technological and domestic weakness.
Bizarre watching people talk about the insecurity of technology.