I've been thinking about this a lot. In terms of political economy, there's a lot of reliance on the ideas of information, free will and rationality.
To cut a long story short, if people have information about the market, they can use their knowledge of their own desires to act, which creates some sort of efficiency. Glossing over a load of stuff here because it's a large subject, but that's sort of the spiel you get when people are talking about all sorts of political/economic decisions. And you can generate both leftie and rightie policies depending on how you think about things.
The problem is: if deciding things depends on free will and information, what do we do when there seem to be exceptions? I'll leave aside information problems, since we're really talking about rationality.
But if it turns out people can know what WoW does to their studies, and they still end up playing all day, how should we think about this? The easy answer is that their real desire is to play WoW all day, but that doesn't seem very satisfying. If they're making an irrational decision, that has other consequences for our society.
For instance, a whole load of behaviours need to be controlled. Obvious things like smoking and drugs, but what about overeating? Too much soda? Social media? And what exactly is the correct policy response to irrational behaviour? Keep in mind solutions like taxes/subsidies also rely on rationality.
Our entire idea of "agency", "individuality", and even "freedom" needs to be re-thought. Basically everything we have built rests, in one way or another, on the idea that individuals (barring exceptions such as physical drug addiction) have some sort of inalienable agency. That entire bedrock is simply not true; it's more like a set of highly idealized thoughts from thinkers of the 1800s.
Policy makers and politicians have been struggling with these problems for the last 150+ years, and increasingly so in the last 30, ever since the creation of political microtargeting and the perfection of advertising. Cambridge Analytica and digital microtargeting plus psychographics are just the cherry on top.
Planet Money released an episode touching on all of these subjects: "Ep 915: How To Meddle In An Election".
Yuval Noah Harari has thought this through pretty well, it seems to me; at least about why it is a problem, if not necessarily about the answer. He talks about this in all his books. For an intro to his thinking on this issue, see this article:
https://www.theguardian.com/books/2018/sep/14/yuval-noah-har...
With all due respect to Harari's historical analysis, he seems to make some logic jumps that are unacceptable in a serious philosophical discussion. From the article you cited: "'free will' isn’t a scientific reality. It is a myth inherited from Christian theology." This is a false dichotomy. Free will is still a deep philosophical question. When I read his "Homo Deus", the lack of soundness of his challenge on liberalism seemed to contrast with the quality of the rest of the book.
I'm not so sure that's a false dichotomy. There are plenty of elements of Christian myths that are considered 'scientific reality', such as people and fruits and the fact that the Romans crucified people. There are also elements that are not. However, there are also influences on our culture that originate neither in science nor in Christian theology.
I think you interpret that passage to mean: "All influences must be from science OR religion, but not both. It is from religion. Therefore, it is from science." However, I interpret it more along the lines of: "Science has not backed up this idea of free will. So, how did we get this idea in our culture, you might ask, if it's not real? Well, it is inherited from Christian myth." This second reading is not an argument (he's not proving why free will is not a scientific reality), but an explanation of why the idea of free will exists.
You may still not be convinced by his assertion, but I don't think it's a false dichotomy. Additionally, he may make such assertions in his work Homo Deus (which I have not read), but I don't think this is necessarily an example of fallacious reasoning.
The problem I see is: fact a) free will isn't scientific; fact b) Christianity depends on free will to make sense; fallacious conclusion) liberalism inherited the notion of free will from Christianity.
There are many philosophical lines that do not base free will on divine punishment.
Now imagine for a moment that you're not better than everyone else at thinking for them.
Agency is the only option. It's not like the concept of agency was thought up in a society with no dumb people.
If you want to live in a society where the state doesn't believe in agency, and where your difficult decisions will be made by people who are (on average) more intelligent than you, go ahead and move to the PRC, they're right up your alley!
People are actively opting out of advertising, which is why major advertising businesses are pulling out all the stops (see Google's recent announcement that they will restrict advertisement and tracking blockers); these businesses will suffer with the new generation of "consumers".
This is exactly the problem: the fallacy of self-exclusion on the part of those who would rule or otherwise think for us.
If humans are all vulnerable to all these manipulative tactics, where are we supposed to get humans who are not?
There is another alternative: reclassify exploitation of cognitive failure modes as a type of initiation of force. We already do this with deception in certain areas like con artistry, giving false testimony, bad faith in business, etc. Maybe we need to vastly expand our notion of tortious or civilly actionable deception to include clear use of dark patterns.
The idea that we are perfectly rational and can deal with any form of deception is perhaps the intangible analog of saying we are impervious to bullets. If that's not true, then it is the responsibility of law enforcement to protect us.
This is kind of radical of course. It would lead to regulations on speech that classify certain kinds of speech as assault.
The more I see of this emerging mass mind control dystopia the harder it is for me to think of non-radical solutions.
> ...reclassify exploitation of cognitive failure modes as a type of initiation of force.
Yeah, I think there's some room for that, and the law (as you point out) is already somewhat headed that way. I think we need to be careful also not to write laws we can't afford to enforce fairly, or read too much into people's private interactions.
I remember having to help my employer prove that they were not the reason I wasn't in school when I was 17; and I think even with that they were still at my mercy. As a result of this, and a pretty large number of other experiences, I'm very very sensitive to the unintended outcomes of the law, and the limits of legislators' ability to understand what precisely they are doing.
What you mention is already the case. It is fully within the interest of decentralized power holders and policy makers to control the population via the information that's fed to them. That's why the Iraq invasion happened in the first place, and that's why the US continues to try to spread fear about Iran or China.
There is no way to destroy that machinery, only to repurpose it for different ends.
>Now imagine for a moment that you're not better than everyone else at thinking for them.
I didn't read that in GP. Currently we seem to be very reluctant to admit that we're all vulnerable to being manipulated, and it might actually increase the total amount of agency available to individuals if we could restrict the ways we're allowed to manipulate each other.
The problem is the line between preventing manipulation, and imposing on people's private thoughts.
Poorly-conducted industry-funded research? That's pretty straightforward; and in some sense it is not that different from other forms of fraud. I'm okay with punishing people for acting with intent to knowingly mislead.
On the other hand, you look at lending, and it becomes a little bit difficult. There are illegal interest rates in many places, and for setting an upper bound, that works fine; but people throw around a lot of dubious allegations about lenders.
At some point, if you deny the existence of agency, you end up telling people things you can't know, because you don't have the imagination to consider that they might know what they're doing.
I didn't intend to deny the existence of agency, only to point out that it's often unclear exactly how much agency we have in different situations, and it could be useful to acknowledge that this is a thorny subject.
This is the problem: agency is dead, at least at grand scales. The manufacture of consent has been a thing for a long, long time now, and it won't go away simply because some people are afraid of looking into the reality of things.
But we frequently are better off having others think for us. There is a broad spectrum of examples, from research that we don't have the time or resources to do ourselves, to, say, getting advice from a friend while in a crisis that we fail to follow even though we would give that same advice ourselves.
Voluntary delegation is different than involuntary delegation.
This is a hard thing for people to get over, and I don't really understand why. As an analogy, think about Right to Repair: I can simultaneously say that I want consumers to have the ability to control their devices, install custom firmware, and crack them open to repair them if they break, without making an assertion that every single consumer is qualified to do so.
With Right to Repair, I don't lose the ability to let someone qualified make decisions about my device, or to set device policies. It doesn't require a massive amount of faith in ordinary people, just an acknowledgement that a single company making those decisions for everyone is overall bad for the market.
And the fact that some individuals will make bad decisions and hand their iPad to Uncle Joe to fix with his power tools is an acceptable tradeoff, given that it means we're not forced to solve the much harder problem of deciding for everyone which modifications and usage is "correct" for every device.
The reason people go in so hard on individual freedom isn't because individuals are amazing -- individuals are flawed as heck. It's that it is extremely difficult to figure out when someone is fit to take over someone else's decision power, and that getting it wrong comes with massive risks. Given that we don't have an easy formula to tell when delegation is the right choice, or even to define what happiness is, or even to define whether or not happiness should be a universal goal, we (when possible) leave it up to individuals to make their own decisions or voluntarily delegate their decisions to another person.
Individual freedoms allow us to practice Defense in Depth on a societal level, at the cost of making us more vulnerable to some manipulations and allowing individuals to make some bad decisions about who they delegate their thinking to.
It's not perfect, but it is terribly practical, and nobody that I know of has come up with a better scheme yet.
>Voluntary delegation is different than involuntary delegation
I mean, this is 100% correct, yet that's not quite the problem, and stating it that way leads to overall confusion. The issue is that policy makers, companies, and other profit-motivated groups have found ways to get inside the unconscious mental processes that lead to conscious decision making. This means that said groups can hijack the agency of thousands of people at the same time with the same set of tools. That is what's happening right now. Groups had already been using mass media to manufacture consent, but this new machinery is far more efficient and less expensive than previous systems.
The defense-in-depth analogy is indeed very good, yet the agency of a critical mass of people has already been compromised by this new machinery, and this machinery preys on people's unconscious instincts, fears, and epigenetic predispositions. Those are things which simply can't be untaught, after the fact, to hundreds of millions of people at a whim.
> this just means that said groups can hijack the agency of thousands of people at the same time with the same set of tools.
Yeah, very good point.
What we're trying to get at is this idea that some of the voluntary decisions people make aren't really voluntary.
What's (sometimes) proposed though is that this somehow invalidates the idea that people should be able to make their own decisions at all -- that they're incapable of it.
To go back to the idea of people controlling their devices, a hacker might trick me into giving device permissions for something I don't want -- because I misunderstand the permissions, or because they have some kind of leverage over me, or because they convince me that the permissions are no big deal.
I'm all for efforts to address 'Limbic Capitalism' that involve educating consumers and trying to plug those kinds of holes and giving people ways to interact with brands where brands can't sidestep informed decision-making processes. I disagree with someone who looks at those holes and says, "well clearly freedom isn't working", for the same reasons I disagree with someone who looks at Android's horrible permissions system and says, "well, clearly users shouldn't be able to decide whether or not an app gets location privileges, clearly only the smart people at Google should."
On the software side of things, we've learned that things like upfront permission prompts on app installation circumvent user agency, and we've started to move more towards forcing apps to only ask for permissions at the moment they need them. We've also given users tools like the ability to grant a permission temporarily rather than permanently.
Those kinds of solutions help defend against exploitative behavior without robbing users of agency.
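To make the pattern concrete, here's a toy model of just-in-time prompts plus temporary grants. This is purely illustrative, not any real platform's permission API; all names here are hypothetical:

```python
import time

class PermissionManager:
    """Toy sketch of just-in-time, optionally temporary permission grants.

    Hypothetical and illustrative only -- not a real platform API.
    """

    def __init__(self):
        # permission name -> expiry timestamp (None means granted permanently)
        self._grants = {}

    def request(self, permission, ask_user, temporary_seconds=None):
        """Prompt the user only at the moment the permission is needed."""
        if self.is_granted(permission):
            return True
        if ask_user(permission):  # prompt shown in the context of actual use
            expiry = None if temporary_seconds is None else time.time() + temporary_seconds
            self._grants[permission] = expiry
            return True
        return False

    def is_granted(self, permission):
        expiry = self._grants.get(permission, False)
        if expiry is False:
            return False  # never granted
        if expiry is not None and time.time() > expiry:
            del self._grants[permission]  # temporary grant expired
            return False
        return True

pm = PermissionManager()
# The user taps "Only this time": grant location temporarily
pm.request("location", ask_user=lambda p: True, temporary_seconds=0.05)
print(pm.is_granted("location"))  # True while the grant is live
time.sleep(0.1)
print(pm.is_granted("location"))  # False after the temporary grant expires
```

The key property is that the app never accumulates standing permissions the user forgot about: a refusal costs nothing later, and a temporary grant quietly lapses.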
I think the only hope lies with the newer generations, and in trying to teach them as well as possible about the dangers this newly opened Pandora's box has released...
I studied international relations, and this kind of thing keeps me up at night. I know that the CIA and other intelligence agencies have been exploiting this for a long, long time, and that there is no true defense against this problem other than "making it illegal", and that sure as heck has never stopped them from doing anything.
Anyhow, thank you for being around and helping to bounce ideas around!
But it's not just a hard problem we can solve once, it's a wildly difficult problem that we haven't been able to solve for thousands of years.
I brought up the Defense in Depth angle because I think it's kind of the same category of problem. Why do we need multiple layers of security on our devices -- why shouldn't Apple just validate the code on everything that gets submitted to the store? Why shouldn't developers just write code that's bug free?
Because they just can't.
It's not that everyone has stopped trying to answer questions like "what is the purpose of society?". Nobody's talking about abandoning anything. If you think the problem is solvable, you're welcome to jump into philosophical circles right now and solve it. And once you've solved it, I promise I will happily move over to a totalitarian system where a perfectly informed formula decides what my life should be.
In the meantime, we need a pragmatic solution, and individual freedom is the best one I've seen proposed.
Okay, sure. But what does the difficulty of taxing something have to do with the difficulty of building a single central authority that makes good decisions for a population without their consent?
Passing laws is not the difficult part, any government can do that :)
Right -- in most cases through a voluntarily elected representative democracy, or something similar. In the US, a decent portion of these taxes are even imposed by individual state laws, not just through the federal government. This is, maybe not the complete opposite of totalitarianism, but it's pretty close to the opposite.
But you might be missing the point of what I'm saying. If you want to move to a totalitarian system where we centralize all decision-making power, it's not enough to find a couple of things we all agree on. You have to be right nearly all of the time.
Go back to the Right to Repair analogy I made earlier. There are generally accepted best practices for most devices (for example, I don't want to be able to overcharge the battery until it explodes). But just because we can think of a couple of examples of this, it doesn't necessarily follow that a single company should control everything I can do with my device. Why? Because my concern isn't that Apple or Google will never get anything right, it's that their priorities won't in general align with mine, and that if something goes wrong, I won't have a recourse to correct the problem (The Defense in Depth analogy).
The same concerns are present with any system of government. If you want to remove user/citizen agency, you need a consistent framework or set of tests that can be applied to every decision you make. You need a way to make a decision and, without getting the consent or input of the general populace, figure out whether it's in their best interests. That's the difficult, impossible task.
Not passing the law; not quieting the companies that opposed it. That stuff might not be trivial, but we know how to do it. We do it all the time. We don't have any idea of how to objectively determine whether or not a random policy is in the public's (or an individual's) best interest without their input.
And it's worth noting that even in drug policy, even representative democracies struggle with this. In the US, we had prohibition, which was an attempt to say, "come on, alcohol is obviously bad for society". That failed pretty spectacularly. Now we do cigarette taxes, but we haven't gone so far as to actually ban cigarettes, even though we're all pretty much agreed it would be better if they stopped existing. And more recently, we're starting to see states decriminalize marijuana, which is a clear example of citizens/states rejecting a centralized "common-sense" decision about what was good for them.
Corrections like this are examples of the system working as intended.
You're looking at this through a government vs. corporation regulatory lens that I'm not really following. I'm arguing that voluntary delegation of a decision making process is different than involuntary delegation, and that preserving individual freedom doesn't mean you can't ever delegate a decision to someone else. I don't understand how cigarette taxes relate to that idea.
I think what you are missing is that this is not an option; it is a reality. Cambridge Analytica already happened, and since then they have moved on to manufacturing consent in many other countries' elections.
For a long time, marketing and political propaganda have gone hand in hand
Manufacture of consent is something that has been happening for a long time now
The difference now is that it can be done far more cheaply, on a bigger scale, and more effectively than ever before. I would indeed recommend you check out the Planet Money podcast at the least.
Lastly, just because you see the value of mental agency and freedom of thought, it doesn't mean that other people will value it the same way you do, or as much as you do. Psychographics is also a thing, and it is used to great effect when it comes to manipulating people this way.
You're presenting a false choice here: either we live with free-market limbic capitalism, or we live in a totalitarian, PRC-like state. There are other options that mitigate some of the issues with limbic capitalism without being totalitarian.
The problem is that when people stop believing in agency and will, everything gets worse. They're not a perfect model of things, but imposing on society a single attractive view on some pet issue of yours is almost always worse than letting people get it wrong.
Private society is actually getting better at avoiding misuse of food, drugs, and communication platforms. My young sister is well aware of a lot of the pitfalls: she monitors her use of social media rather carefully, watches her food intake, and asks interesting questions about the effects of certain foods. In general, I have high hopes for the next generation; I think they will figure this stuff out fast enough.
>if it turns out people can know what WoW does to your studies, and they still end up doing this, how should we think about this? Easy answer is that their real desire is to play WoW all day, but that doesn't seem very satisfying.
It's also wrong, as it makes people unhappy. The whole idea of real free will breaks down, at the latest, when we look at substance abuse. The underlying machinery of our brains limits the rationality of our decisions, and ultimately the freedom of our will.
I am afflicted with ADHD and know this first-hand. I still managed to get an education and a qualified job in software, but for years I hated myself for not being able to simply do tasks at hand. Libertarians would say I value goof-off time on reddit more than a steady job, but that's not true, in fact I would be a happier person if the internet just vanished or I finally got my procrastination under control.
Not OP, but for me Vyvanse and exercise really helped get things more under control. Environment also matters. Open plan offices are my kryptonite. My current office is open, densely packed, and loud, so I'm 100x as productive at home.
Also, it helps to realize that high stress, crunch time situations are where I thrive. Major outage bringing the company to its knees? I got this. I'm calm, even. Tight, almost impossible deadline? I work extremely quickly and deliver high quality results.
Simple bug that's super easy to fix with no pressure? I'll need three weeks to do it. 1 minute to fix it, the rest of the time for unit tests. I get distracted more easily during these times.
On the plus side, my tangents sometimes pay off. I get curious, add instrumentation, discover relationships between various metrics, and use that to save millions of dollars, sometimes tens of millions.
On the other hand, ADHD also has some emotional side effects, and I thus have a higher chance of pissing people off without meaning to. I also score higher on various autism tests, so that could be a factor (though I've never been diagnosed). So my (sometimes) good work is balanced by a weaker ability to collaborate.
I will not lie, one of the important factors is that I hail from Europe, so getting treatment and seeking therapy was never a financial problem. ADHD, anxiety, and depression are frequent comorbidities, since ADHD sufferers have to face "you are smart, but you are wasting your potential" all their lives and are helpless to change it.
Exercise helps, if I am able to do it regularly and frequently (which is currently not the case due to family and health). I tried medication, but Ritalin would make me anxious and various SSRIs/SNRIs would nuke my sexuality. I am rather strict about not using my smartphone in bed, as that wrecks my sleep patterns.
Apart from that, not much, I am afraid. I tried various systems to organise my work, from having a to-do list buddy (I would regularly tell them what I have to do and report back) to pomodoro, but in the end, my discipline is too lacking, and I get distracted too fast.
I stopped working self-employed and am now working for one of the top 100 biggest companies. That helped a lot since I can sweep procrastination episodes under the carpet (the expected intensity of work is rather low), but is insanely frustrating because I have low leverage to improve our way of work.
ADHD has the upside, for many, that they will absolutely rip into information and devour it as long as it is interesting. In my case, that means I am the "go-to" person in our department, because I know more about software development, tools, and languages than most, even if I am probably not the most productive coder. I'm currently trying to develop into some kind of internal consultant for digital transformation/disruptive business ideas; I seemingly have the right ideas, and am now hoping for someone higher up to notice them.
Behaviour, especially widespread behaviour, is rarely irrational. Despite its reputation, Sweden has become surprisingly unpragmatic in recent years. It is very easy to end up in a situation where you can't continue your studies, but you also can't enter the job market effectively.
I absolutely think things like loot boxes, sugar and gasoline should be regulated or taxed to decrease their potential to do damage. But at the end of the day you have to provide positive opportunities for people to do the productive thing.
I think systems have become increasingly unpragmatic, but not necessarily people's behaviour. In 2010, which the article refers to, youth unemployment in Sweden was ~25%, the housing market had become a mess already and various rules for student loans and unemployment had changed. So it isn't like they had a fantastic future just waiting for them.
This is certainly a difficult problem, and I think that as we learn more on this subject people are going to be increasingly uncomfortable with the answers available to us and what it means for the products we create.
What I feel is that, regardless of regulation on a societal level, it is up to us as individuals to decide what we feel comfortable with in regards to the products we create and the risk of harming others that we are willing to take on. We should be able to acknowledge the fact that - regardless of whether or not addiction is a personal failing or an inescapable biological flaw - intentionally leveraging and profiting off of it is an unethical action. It is exploitation regardless of the nature of the flaw being exploited.
It's complicated, because I think there are many types of products that can't avoid the potential for addictive use. In those situations, we have to ask ourselves whether we're targeting that addictive use, or whether it's an undesired side effect that a user may come across - and in that latter situation, to find ways to mitigate those harmful outcomes. If we want to provide outcomes that help our customers and provide them value, we need to be willing to accept that we may have to create awkward-feeling and profit-limiting mitigations.
Flawed as it is, education and the promotion of people developing self-awareness about their addictions is a strong tool we have for mitigation as long as it isn't something buried where users won't easily find it. Setting limits or ceilings on spending (spending of both time and money) is another mitigation. Avoiding monetization models such as gambling mechanics that inherently exploit addiction is another. Promoting and sticking to direct sales of products for discrete costs is another. But there is still so much we need to learn, and many potential pitfalls.
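One of those mitigations, a hard ceiling on spending per period, is conceptually simple. Here is a toy sketch (all names hypothetical; not any real payment or platform API):

```python
class SpendingLimiter:
    """Toy sketch of a user-set, per-period spending ceiling.

    Hypothetical and illustrative only -- not a real payment API.
    """

    def __init__(self, limit_per_period):
        self.limit = limit_per_period
        self.spent = 0.0

    def try_spend(self, amount):
        """Allow a purchase only if it keeps the user under their self-set ceiling."""
        if self.spent + amount > self.limit:
            return False  # refuse, and ideally surface the limit to the user
        self.spent += amount
        return True

    def new_period(self):
        """Reset at the start of each period (e.g. monthly)."""
        self.spent = 0.0

limiter = SpendingLimiter(limit_per_period=50.0)
print(limiter.try_spend(30.0))  # True
print(limiter.try_spend(30.0))  # False: would exceed the 50.0 ceiling
```

The point of the sketch is that the limit is set by the user in a calm moment and enforced mechanically later, when the exploitable impulse is strongest — the same shape as the time-limit and self-exclusion mitigations mentioned above.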
To the detriment of people demanding their "freedom", it turns out people don't always make the best choices for themselves, and for their societies at large.
Maybe we can take the lessons learned in adtech et al., and apply them to governing irrational people.
In this world, how does the government decide what the best choice is? One of the reasons we ended up with "people should decide for themselves unless it harms someone else" is that we don't have any other basis to evaluate options. The church, the monarchy, and the market have been three other go-to deciders. Maybe a 1970s-style AI with flashing lights?
Could you please not post ideological rants here? And please don't use uppercase for emphasis. Both of these things are covered by the site guidelines. If you'd review them and follow them, we'd be grateful.
Unfortunately your reality impacts mine. I am not trying to be benevolent, just trying to limit the devastating impacts your supposed freedom has on my planet.