Hacker News | Zak's comments

This assumes a high level of technical skill and effort on the part of the stalkerware author, and ignores the unlocked bootloader scare screen most devices display.

If someone brought me a device they suspected was compromised and it had an unlocked bootloader and they didn't know what an unlocked bootloader, custom ROM, or root was, I'd assume a high probability the OS is malicious.


A computer that can run arbitrary programs can necessarily run malicious ones. Useful operations are often dangerous, and a completely safe computer isn't very useful.

Some sandboxing and a little friction to reduce mistakes is usually wise, but a general-purpose computer that can't be broken through sufficiently determined misuse by its owner is broken as designed.


Open source helps, but if you didn't build it yourself, you'll need to trust whoever did. F-Droid reproducible builds help in that you only need to trust either F-Droid or the developer, not both.

The browser tends to be safer because it has a stronger sandbox than native apps on a mobile OS. It's meant to be able to run potentially malicious code with a very limited blast radius.


> Open source helps, but if you didn't build it yourself, you'll need to trust whoever did.

You need to audit the code. If you are not capable of doing that, you need to trust someone to do it.


Also, even obfuscated JS code is easier to understand than machine code if you're trying to tell what some non-open-source thing is doing.

That, but with a little more ceremony. It gets treated as a separate app by mobile OS app switchers and doesn't show the browser's chrome or other open tabs.

https://en.wikipedia.org/wiki/Progressive_web_app
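For the curious, that "separate app" behavior comes from the web app manifest's `display` member. A minimal manifest might look like this (the name and paths are made up for illustration):

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

With `"display": "standalone"`, the installed PWA launches in its own window without the browser's chrome, which is why the OS app switcher treats it like a native app.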


Android modes provide control over notification display.

Modes control which people and apps can trigger a sound/vibration, but also offer the option to hide the silenced notifications from the status bar, pull-down shade, and dots on app icons. I hide them from the status bar, but not the pull-down shade so that I can manually check if I want to, but don't see them at a glance.

I'm not a heavy user of this feature though; I mostly don't install apps that have spammy notifications.


Right. I'm saying that living for a week without any notification bar at all made me realize that even my usual well-curated notification bar is impacting me much more than I realized.

I imagine usage patterns vary greatly. For me, most of the time, I have it set to only allow messages from contacts, and I usually handle those immediately.

Pointers are famously difficult to learn and reason about even though the basic principles are simple. Programming in a style that requires direct manipulation of pointers when it's not actually necessary is usually regarded as unwise because it's so hard to get right.

OP had no problem with pointers prior to trying C++. I think there is a case to be made that C(++) makes pointers unnecessarily confusing, and that otherwise there is no real disconnect between understanding pointers in theory and in practice.

And C++ makes everything extra confusing with the capability of operator overloading.

That has to be one of the worst features ever added to a language.


> C++ makes everything extra confusing

> I had massive problems with this pointer stuff

No, OP explicitly had problems after being introduced to the pointer concept.


Pointers aren't hard, it's C/C++ that make them complicated. Addresses and indirection in any assembly language are simple and straightforward, easy and even intuitive once you start actually writing programs.

C and C++ pointers aren't any harder than pointers in assembly, at least as far as novices complaining about pointers being hard are concerned.

They are though! Indirection in assembly is just something like:

  ldr dest, [src, offset]
It's straightforward, pretty hard to mess up, and easy to read too, because the format is consistent.

Whereas in C all the following are valid (and it becomes even more confusing with assignment in the declaration statement, tons of footguns and weird syntax):

  int* a;
  int *a;
  int a[];
  int a[5];
Assignment is weird too, especially because dereferencing and defining a pointer both use '*'.

  *a   = c;
  a[0] = c;
Then you have structs/unions and their members, and what if those are pointers? You get . and -> syntax. It's weird and complicated, much much more complicated than assembly. That's before you get to casting and types which make C much more complicated than assembly for doing low level stuff.

Tell that to the thousands of comp sci students who drop out every year because they don't like programming in C!

Right, but it's hard to tell how much confusion is caused by C syntax vs the idea of a memory address.

In particular I think people are very confused by declaring pointers and the overloaded meaning of the dereference operator.


I used to think I was incapable of learning "real" programming because I didn't get C. When I later read a book on programming in assembly, I realized that everything that had felt so complex was actually not so difficult. C pointer syntax is weird and doesn't parse naturally for many people, especially programming novices who might not yet have a solid grasp on what/how/why they're doing anything.

...that's the reason why I love managed environments like C#/Java/etc :-))

Phones are perfectly capable of accessing websites. I think a lot of the shift here has to do with companies aggressively pushing apps because apps are more profitable, which in turn trains users to expect apps.

Sorry, but usable mobile websites are by faaaar outnumbered by crappy ones - most mobile websites aren't really optimized, more of a "just deploy some custom mobile CSS and people will use it" approach.

Companies with poor quality mobile websites also usually have poor quality apps.

The website can be objectively bad, but still better than the app experience.


I leaned on Claude Code quite a bit resurrecting Clojure on Android[0] and got good results with it. Using the Clojure REPL MCP works especially well for about the same reasons I find developing with a REPL myself important: it can query the running program to see how things work, and test implementations with rapid turnaround.

I wasn't sure if I should expect great results relative to more popular languages with more code for the LLM to train on, but it looks like that's either not a big issue, or Clojure is over the popularity threshold for good results. I also previously expected languages with a lot of static guarantees like Rust to lead to consistently better results with LLM coding agents than languages like Clojure which have few, but that's untrue to the point that "bad AI rewrite in Rust" is a meme.

[0] https://github.com/clj-android


I think when making the claim a company is a net negative, it's necessary to explore what would have happened if the company hadn't been founded.

I find it unlikely, for example, that there would not be a dominant centralized forum platform. People would have certainly started problematic communities on the dominant platform, and it's unlikely a platform with strict moderation would have gained dominance before 2015 or so. I do think a dominant player would have been established by 2015.

Do you think whatever you see as harmful about Reddit would not have occurred if the company didn't exist?


This is like saying “that guy would have died eventually if I didn’t murder him.”

The corporate shield for accountability is so annoying in this way. Nobody’s ever responsible for things that they did as human beings.


This comment assumes both that Reddit is harmful and the outcomes were predictable. The former is debatable, but I am sure the latter is not true; the founders of Reddit didn't know what they were building.

They thought it was a social bookmarking thing for people to find and share blog posts. It didn't even have comments for the first half year. For two more years, self-posts only existed as a hack where the poster had to predict the post's ID to make it link to itself. User-created subreddits didn't show up until about 2.5 years after the site launched.


I’m pretty sure all endless scroll social media has been scientifically proven to be harmful. Reddit also runs a 1:1 copy of TikTok.

I don’t really care to defend the morality of extremely wealthy VC firms like YC. They know the enshittification process that happens with 100% of the companies they fund.

They could create companies with charters and ownership structures that ensure they exist to better the world and make good products as their binding guiding principles, but they choose not to.

More fun with this subject: https://theonion.com/sam-altman-if-i-dont-end-the-world-some...


It would have happened more slowly at least, delaying the increase in populism, nihilism and depression in the Western world, the anglosphere in particular.

What traits specific to Reddit as opposed to a hypothetical generic alternative forum platform do you think are major contributors to those social trends?

Recommendation engine pushing users into ideological bubbles, public voting mechanism creating incentive for conformity which then creates purity spirals, lack of moderation.

Early Reddit had a recommended tab, but that didn't last long. The current recommendation features are relatively recent - this decade at least.

It would surprise me if the winner in that space didn't have a public voting mechanism. Digg, Reddit's early major competitor, had one, and heavy-handed moderation surrounding the HD-DVD decryption key leak was one of the major inflection points that drove users from Digg to Reddit. Stricter moderation during that time period would have been a losing strategy.


That's mostly imputable to Facebook, Twitter, and Instagram. Reddit is a footnote in the mainstream, which is dominated by those 3.

Given the number of Reddit users across the Anglosphere, I disagree that Reddit is not a major contributor.

I think we're on the same side in principle. The ability for people to interact with the wider world using general purpose computers that they fully control should be sacrosanct, and attempts to interfere with that such as remote attestation, app store exclusivity, and developer verification are evil.

Sandboxing apps by default is not that. The principle of least privilege is good security. If I vibecode some quick and dirty hobby app and share it with the world, it's better if the robot's mistake can't `rm -rf ~/` or give some creep access to your webcam.

The user should be able to override that in any way they see fit of course.

