Hacker News | wmf's comments

There were plenty of apps that relied on implementation quirks.

They mostly relied on OS/Toolbox implementation quirks though, not hardware implementation quirks, because applications that relied on the latter wouldn’t run on the Macintosh XL and that mattered to certain market segments. (Like some people using spreadsheets, who were willing to trade CPU speed for screen size.) Similarly, anything that tried to use floppy copy-protection tricks wouldn’t work due to the different system design, so that wasn’t common among applications.

So even things that wrote directly to the framebuffer would ask the OS for the address and bounds rather than hardcode them, copy protection would be implemented using license keys (crypto/hashes, not dongles) rather than weird track layouts on floppies, etc. It led to good enough forward compatibility that the substantial architectural changes in the Macintosh II were possible, and things just improved from there.


Eh, there were plenty of games that were coded for a particular clock speed. Once the SE came out, they got updates that included a software version of a turbo button, letting you select which of two speeds to run at. They run FAST on an SE/30 or Mac II and unusably fast on anything newer.

I didn’t encounter too many of those back in the day, I think because there was the VBL task mechanism for synchronizing with screen refresh that made it easy to avoid using instruction loops for timing.

Much more common in my experience was the assumption that the framebuffer was 1-bit, but such games would still run on my IIci if I switched to black & white—they’d just use the upper left 3/4 of the screen since they still paid proper attention to the bytes-per-row in its GrafPort.
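The bytes-per-row bookkeeping described here can be sketched in Python (illustrative only; `pixel_offset` is a hypothetical helper, not a Toolbox call):

```python
def pixel_offset(x, y, row_bytes):
    """Byte offset and bit number of pixel (x, y) in a 1-bit framebuffer.

    row_bytes is the GrafPort's rowBytes: the stride in bytes between
    scanlines, which is larger on a wide screen than on a 512-pixel-wide
    original Mac even though each pixel is still one bit.
    """
    byte_offset = y * row_bytes + x // 8
    bit = 7 - (x % 8)  # the leftmost pixel lives in the high bit
    return byte_offset, bit

# Original Mac, 512 pixels wide: rowBytes = 512 / 8 = 64
print(pixel_offset(0, 1, 64))   # (64, 7)
# A 640-pixel-wide 1-bit screen: rowBytes = 640 / 8 = 80
print(pixel_offset(0, 1, 80))   # (80, 7): same pixel, different stride
```

A game that hardcodes 64 instead of reading rowBytes smears its output across rows on a wider screen; one that honors rowBytes just paints the upper-left portion of the display, as described above.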

Could be that by the time I was using a Mac II, though, all the games that didn’t meet that minimum bar had already been weeded out.


Yeah, there were a bunch of floppy games which only ran on an original Mac or maybe a Plus. No go with even my Mac SE.

Out of curiosity, what app are you thinking of? Of all the types of software used with classic Mac OS (INITs, CDEVs, FKEYs, Desk Accessories, Drivers, etc.), apps would be the least likely to rely on implementation quirks.

Macintosh Common Lisp - at least the versions floating around Mac Garden and such - seems to refuse to run on anything besides accurate emulators and real hardware.

It takes 4-6 years to design a CPU so yes. Keep in mind 128 GB is the maximum; most laptops will ship with 16-32 GB.

It sounds like they are already doing you a favor so I wouldn't ask for higher pay on top of that.

What's the deal with Antirez and PHK refusing to add TLS support?

I'm not "refusing to add TLS support"; I insist that the certificate be safely isolated in a separate process for security reasons. There are many ways to skin that cat.

Aside: Loved your bit talking about money and varnish in Gift Community[1]. And thanks for the Beerware License, I've started using it!

[1]: https://www.youtube.com/watch?v=tOn-L3tGKw0


Varnish Enterprise has https support.

The whole point of Varnish Software keeping a public version of "vinyl cache" as "varnish cache" with TLS is to give people a way to access a FOSS version with native TLS.

I think TLS is table-stakes now, and has been for the last 10 years, at least.


just use the tool that does the job.

TLS in -> hitch or caddy
Cache -> varnish/vinyl
TLS out -> haproxy

Connect them up with Unix sockets, if you like.
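A minimal sketch of the TLS-in leg of that chain, assuming haproxy terminates TLS and hands requests to Varnish/Vinyl over a Unix socket with the PROXY protocol (file paths and names are illustrative, not a tested config):

```
# haproxy.cfg: terminate TLS in front of the cache
frontend tls_in
    mode http
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    default_backend cache

backend cache
    mode http
    # Varnish listening on a Unix socket, started with something like:
    #   varnishd -a /run/varnish.sock,PROXY ...
    server varnish unix@/run/varnish.sock send-proxy-v2
```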


Because the topic keeps coming up, I've now written the tutorial we should have had years ago: https://vinyl-cache.org/tutorials/tls_haproxy.html

Terminate tls and you have your cache.

Not the original poster but I do have some ideas. Official Bluesky clients could randomly/round-robin access 3-4 different appview servers run by different organizations instead of one centralized server. Likewise there could be 3-4 relays instead of one. Upgrades could roll across the servers so they don't all get hit by bugs immediately.

If multiple personal data servers (PDSes) share the same set of posts, how would we guarantee that they are tamper-resistant to third parties?

PDSes should be sharded not replicated. Your posts live on your PDS which lives in one place (although it can move).

What's stopping us from doing both?

Cost and complexity tradeoffs. IMO the relay/appview is the current bottleneck.

This is why I'm hoping fiatjaf has a recommendation here. I have a feeling he might have a proposal that solves this, or at least some of it.

They're property, which is also illegal to steal.

Those people aren't the ones doing the work though.

you could call native APIs from JavaScript or Java say, then in your world that's a "native" application because it uses the APIs the platform provides

Yes, this is what we want.

an application could be implemented with Objective-C and/or Swift but not use Cocoa/AppKit/SwiftUI APIs, then that's not a native application

Correct. The toolkit matters, not the language. Native toolkits have very rich and subtle behavior that cannot be properly emulated. They also have a lot of features (someone mentioned input methods and accessibility) that wrappers or wannabe toolkits often lack. To get somewhat back on topic I notice and appreciate that Xilem mentions accessibility.

games written with Vulkan/OpenGL aren't "as native"...

Games are usually fullscreen and look nothing like desktop apps anyway so it doesn't matter what API they use.


Zuck has a lot more experience being summoned before Congress than you.

This may be too large to run locally anyway. Maybe they will distill down some smaller open versions later.
