Hacker News | bayesnet's comments

That’s bonkers. I’m on the east coast (not nyc) and a quarter pounder medium meal is $10.49. Meanwhile Five Guys is $20.29 for a regular meal.

Mac prices on eBay are sort of bizarre. Back when I was looking (when you could still purchase a new one in a reasonable timeframe) many of the higher end listings cost as much or more (!) than just getting them from Apple. I ended up buying an Apple certified refurbished Mac Studio for less than the comparable eBay listing.

Not sure who’s buying these or if it’s just people dreaming about finding a rube.


I'm beating a dead horse here but the challenge is a11y. Chromium wrappers get a11y for free; bespoke UI frameworks must implement accesskit (or something) which is a lot of work and something that (imo sadly) many small teams decide is not worth the investment.

I took a look at gpui-component a while ago when assessing GPUI for a project I was working on. IANAL but was dissuaded because it's almost certainly not compliant with Zed's license: gpui-component "borrows" code patterns lifted straight from the main zed repo, which therefore must be AGPL/GPL (unlike the gpui crate alone, which is Apache IIRC). Caveat emptor (caveat user?).

I think it was even featured and praised in a recent zed blog post

The existence of a soundness bug in the typechecker doesn’t refute the value of soundness as a language design contract.

If anything it’s the opposite: issues demonstrated by cve-rs are _language bugs_ and are _fixable_ in principle. “Safe Rust should be memory-safe” is a well-defined, falsifiable contract that the compiler can be measured against. Meanwhile memory unsafety is a feature of the semantics of C++ and so it would be absurd to file a bug against gcc complaining that it compiled your faulty code.
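To make that contract concrete (my own toy example, not code from cve-rs): the borrow checker accepts the function below because it can prove the borrow ends while the referent is still alive, and rejecting the commented-out variant at compile time, rather than letting it become a use-after-free, is exactly the falsifiable behavior being described.

```rust
// The borrow checker proves `r` never outlives `x`, so this compiles.
fn read_through_ref(v: i32) -> i32 {
    let r;
    {
        let x = v;
        r = &x;
        *r // last use of the borrow happens while `x` is still alive
    }
    // Returning `r` itself instead would be rejected at compile time
    // ("`x` does not live long enough"), not left as a use-after-free.
}

fn main() {
    assert_eq!(read_through_ref(5), 5);
    println!("ok");
}
```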


The language design contract is unsafe by default: in memory, types, and concurrency. What are you talking about? There are unsafe blocks all over the stdlib. And concurrency safety would need to get rid of its blocking IO, which they haven't even acknowledged.


> The language design contract is unsafe by default

False. The language design is safe by default, something you can confirm super easily by just doing the Rust tutorials and comparing with C or C++.

Read the repo carefully:

   cve-rs implements the following bugs in safe Rust:

   Use after free
   Buffer overflow
   Segmentation fault

It implements them as demonstrations of compiler bugs. It does NOT refute the safety contract.

> There are unsafe blocks all over the stdlib

Unsafe blocks are not the same as unsafe code. They are marked areas that are allowed to escape the automated checks, and there you are at the level of a C/C++ programmer (in those languages ALL THE CODE is effectively marked unsafe).

If you complain about that, it is the same as complaining about ALL THE CODE written in C/C++.
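A tiny sketch of that point (my own example, not taken from the stdlib): an unsafe block is a contained escape hatch wrapped behind a safe API, where the surrounding safe code establishes the invariant the unchecked operation relies on.

```rust
// A safe API whose body uses an `unsafe` block. The bounds check
// establishes the invariant that the unchecked access relies on,
// so callers never have to write `unsafe` themselves.
fn first_byte(data: &[u8]) -> Option<u8> {
    if data.is_empty() {
        None
    } else {
        // SAFETY: `data` is non-empty, so index 0 is in bounds.
        Some(unsafe { *data.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(&[]), None);
    println!("ok");
}
```

This is the pattern the stdlib follows: the `unsafe` keyword marks where the proof obligation lives, it does not mean the API exposed to users is unsafe.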

---

One important thing to understand about Rust: Rust is a systems language and SHOULD be able to implement everything, including buffer overflows, use-after-free, segmentation faults, and such. You should be able to implement a terrible OS, malware, faulty drivers, etc., minimally because that is required to test safe programs!

(example: Deterministic Simulation Testing https://turso.tech/blog/introducing-limbo-a-complete-rewrite...).

But what Rust gives you is that it does not assume you want to do that in most programs.


> There are unsafe blocks all over the stdlib

Physics is unsafe. Something, somewhere needs to provide the safe core.

> And concurrency safety would need to get rid of their blocking IO, which they haven't even acknowledged.

Is your position that blocking IO can't be compatible with concurrency safety? That's a strange claim. Can you explain?


Sure, but then they shout all over how safe they are. They got rid of the safeties pretty late, when they ripped out their GC, but kept their false promises all over.

No, that's common knowledge. I fixed concurrency safety by forbidding blocking IO. Others have too. Maybe there are other ways, but I've never heard of any.


> They got rid of the safeties pretty late, when they ripped out their GC, but kept their false promises all over.

This seems like a non-sequitur to me? The presence/absence of a GC is not dispositive with respect to determining "safety", especially when the GC itself involves unsafe code.


Have you ever seen a GC system with memory unsafety? I cannot remember any.


I think so, assuming I'm thinking of the same thing you are, but I think that's somewhat beside the point. What I'm trying to say is twofold:

- The presence of a GC doesn't guarantee memory safety since there are sources of memory unsafety that GCs don't/can't cover (i.e., escape hatches and/or FFI), not to mention the possibility of bugs in the GC implementation itself.

- The absence of a GC doesn't preclude memory safety since you can "just" refuse to compile anything which you can't prove to be memory-safe modulo escape hatches and/or axioms and/or FFI (and bugs, unfortunately). Formal verification toolchains for C (Frama-C, seL4's setup, etc.) and Ada/SPARK's stricter modes are good examples of this.

In the case of Rust:

- `unsafe` blocks (or at least their precursors) were added in 2011 [0].

- Rust's reference-counting GC was removed in 2014 [1].

That's why I think "ripped out their GC" is a bit of a non-sequitur for "got rid of the safeties". Rust wasn't entirely safe before the GC was removed because `unsafe` already existed. And even after the GC was removed, the entire point of Rust's infamous strictness is to reject programs for which safety can't be proved (modulo the same sources that existed before the GC was removed), so the removal of GC does not necessarily imply losing memory safety either.

[0]: https://github.com/rust-lang/rust/pull/1036

[1]: https://github.com/rust-lang/rust/pull/17666


Ah. I believe pretty much every safe language on the planet constantly has bugs in the implementation that can be exploited to cause unsafety. Sometimes they even get CVEs, e.g. in JavaScript VMs.


I don't do Javascript, but any self-respecting language which calls itself safe is actually safe. I worked for decades in actually memory- and type-safe languages, and never ever heard of a memory or type safety bug.

Just not the cheaters: Rust, Java (until recently), and of course Javascript with its unsafe implementations.

Memory safety bug in a proper lisp? Unheard of, unless you break the GC or do wrong FFI calls.


You've made it clear from this thread that you have no idea what you're talking about. Please do not waste our time by commenting on this topic further.


Ha, I did maintain two safe languages. How many did you?


Huh? It doesn't follow that forbidding blocking IO is either necessary or sufficient for concurrency safety, at least under any definition of "safety" I can imagine. What do you mean? You mean async-not-blocking-event-loop stuff? That's not the only way to do more than one IO at a time.


Non-blocking IO is only one part of providing concurrency safety. Process locks are even worse. All locks are forbidden, to avoid deadlocks.


What’s wrong with printers? Imagine designing a laser that bounces off of mirror spinning at 20k+ rpm while coordinating with a paper feeder. Sounds pretty cool to me


I love printers, I hate ink cartridges


I love printers and ink, I hate the people that make the experience worse in hopes of making more money.


This is more a CC harness thing than a model thing, but the "new" thinking messages ('hmm...', 'this one needs a moment...') are extraordinarily irritating. They're both entirely uninformative and strictly worse than a spinner. In my workflows CC often spends up to an hour thinking (which is fine if the result is good), and seeing these messages does not build confidence.


There’s one that’s like “Considering 17 theories” that had me wondering what those 17 things would be, I wanted to see them! Turns out it’s just a static message. Very confusing.


In the leaked codebase they show 100+ messages that are randomly cycled through


"Reticulating Splines"


Maybe there are literally 17 models in an initial MoE pass. Seems excessive though.


The comment section is already long, but I knew I would find comments about the "hmm" messages I'd started noticing. Yes, it is so irritating to me too. One additional thing I noticed is that verbose information has been more and more obfuscated. I've run CC with the --verbose option for months, and I can see that verbose mode is not verbose anymore. I wish I could do -vvv for maximum verbosity.


Sounds really minor, but was actually a big contributor to me canceling and switching. The VS Code extension has a morphing spinner thing that rapidly switches between these little catch phrases. It drives me crazy, and I end up covering it up with my right click menu so I can read the actual thinking tokens without that attention vampire distracting me.

And of course they recently turned off all third party harness support for the subscription, so you're just forced to watch it and any other stuff they randomly decide to add, or pay thousands of dollars.


I used Gemini CLI for a while because it was free to me. The primary reason I stopped was because it wasn't very good, but their "thinking summaries" didn't help matters. They were model generated and just said things to the effect of "I'm thinking very hard about how to solve this problem" and "I'm laser-focused on the user objective". So I feel you: small things like this make a big difference to usability.


I'm not sure if this is official, but from what I gathered, they just bill 3rd party stuff as extra usage now:

https://news.ycombinator.com/item?id=47633568

(They were against ToS before (might still be?), and people were having their Anthropic accounts banned. Actually charging people money for the tokens they're using seems like a much more sensible move.)


Yes, but I got a subscription because I was tired of alt-tabbing to the Cursor spending dashboard between prompts to make sure I wasn't overspending. I'm ok if they slow me down for a few hours during peak usage. But getting cut off for 20+ days because I'm not thinking about the prompt cache for a bit makes a subscription feel pretty useless.

I was using it with Zed before, because I guess I'm one of the only programmers who doesn't just full vibe, which seems to mean I'm not the target customer for a lot of these companies, who seem to be going all in on terminal interfaces.

I've gone back to Cursor auto the last few weeks, it hasn't been too bad actually, I haven't managed to run out of the $20/mo plan yet.


Could you say more about your workflow? I don’t think I’ve ever gotten close to an hour of thinking before. Always curious to learn how to get more out of agents.


I don't think it's something special about my workflow and more the application area--I'm writing a lot of Lean lately and particularly knotty proofs can take quite a lot of time. Long thinking intervals are more of a bug than a feature IMO: Even if Claude can one-shot the proof in 40-60 minutes I'd rather have a partial proof in 15 and fill in the gaps myself.


It wouldn't be so irritating if thinking didn't start to take a lot longer for tasks of similar complexity (or maybe it's taking longer to even start to think behind the scenes due to queueing).


Agreed. I actually have thought those were “waiting to get a response from the API” rather than “the model is still thinking” messages


It is the new "You are absolutely right!"


Is there any word on whether these vulnerabilities were exploitable on devices with MIE[0]?

[0]: https://security.apple.com/blog/memory-integrity-enforcement...


If Apple won't disclose, we'll need to wait for public PoCs for testing on MIE-enabled devices.

Relatedly, did Apple baseband have similar vulnerabilities as Broadcom WiFi/Bluetooth baseband?


I know this is grumpy, but I've never liked this answer. It is a perfect encapsulation of the elitism in the SO community: if you're new, your questions are closed and your answers are edited and downvoted. Meanwhile this is tolerated only because it's posted by a member with high rep and username recognition.


As someone who used to write custom crawlers 20 years ago, I can confirm that regular expressions worked great. All my crawlers were custom designed for a page, and the sites were mostly generated by some CMS and had consistent HTML. I don't remember having to do many bug fixes related to regular expression issues.

I don't suggest writing a generic HTML parser that works with any site this way, but for custom crawlers regexes work great.

That's not to say the tools available are the same now as 20 years ago. Today I would probably use Puppeteer or some similar tool and query the DOM instead.


I would distinguish between parsing and scraping. Parsing really needs a, well, parser. Otherwise you’ll get things wrong on perfectly well formed input and your program will be brittle and weird.

A scraper is already resigned to being brittle and weird. You’re relying not only on the syntax of the data, but an implicit structure beyond that. This structure is unspecified and may change without notice, so whatever robustness you can achieve will come from being loose with what you accept and trying to guess what changes might be made on the other end. Regex is a decent tool for that.


An interesting thing is that most webpages are generated using text templates. There's some text processing like escaping special characters, but it's mostly text that happened to be (somewhat) valid HTML.

So extracting information from this text with regexps often makes perfect sense.
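A sketch of that idea (hypothetical template markup; plain string splitting stands in for a regex here, the principle is identical): because every row is stamped out by the same template, anchoring on the literal text around the value recovers it reliably.

```rust
// Pages generated from a template repeat the same literal markup
// around each value, so anchoring on that markup extracts the data
// without a full HTML parser.
fn extract_titles(html: &str) -> Vec<&str> {
    html.split("<h2 class=\"title\">")
        .skip(1) // drop everything before the first match
        .filter_map(|chunk| chunk.split("</h2>").next())
        .collect()
}

fn main() {
    let page = "<div><h2 class=\"title\">Foo</h2></div>\
                <div><h2 class=\"title\">Bar</h2></div>";
    assert_eq!(extract_titles(page), vec!["Foo", "Bar"]);
    println!("ok");
}
```

The flip side, as the sibling comment notes, is that this breaks the moment the template changes, which is a scraper's lot regardless of the tool used.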


I think this answer was tolerated when SO wasn't as bad as it is now, and wouldn't be tolerated now from anyone.


It's because SO at the time was a small high-trust society where "everyone knew each other" and so things flew back then that wouldn't fly now.


This is arguable for HSBC (in the UK at least). Ringfencing laws post 2008 have made customer deposits in the UK very difficult to invest profitably, to the point where (at least last time I cared about this) they were charging commercial customers to have UK domiciled accounts.


> Ringfencing laws post 2008 have made customer deposits in the UK very difficult to invest profitably, to the point where (at least last time I cared about this) they were charging commercial customers to have UK domiciled accounts.

I don't follow; why would regulations on consumer accounts change the price of commercial customer accounts?


Small business accounts were/are also subject to ring fencing, and my recollection is that large banks sought to recover the costs of the ring fencing rules via charges on large clients.

Come to think of it this was all also at the time of very low rates which was more likely to be the issue.

