Not sure if you're aware, but it's the labels, not Spotify:
> It pays roughly two-thirds of every dollar it generates from music, with nearly 80% allocated to recording royalties and about 20% to publishing, though how much artists and songwriters ultimately receive depends on their agreements with rights holders, which Spotify does not control. [0]
Spotify is frantically trying to escape the record labels' death grip (hence podcasts), because the labels know they can squeeze it for just about anything with licensing deals. It's a terrible business model! Spotify keeps a third for its costs (and, finally, some profit in the past year or two), i.e. about the same cut that Apple takes from the App Store for doing basically nothing[1].
How the record labels convinced the world that Spotify is the bad guy here is beyond belief.
Wow. This is certainly a take. Two things:
1. Spotify has had a policy for a couple of years now of not paying artists who generate fewer than 1,000 streams per year PER song. So if I get 999 streams on each of my 50 songs every year, I get nothing from Spotify.
2. Major labels own major stakes in Spotify. They are one and the same.
> Not sure if you're aware, but it's the labels, not Spotify:
*not only Spotify
They had plenty of problems from people abusing their system to steal listens from actual artists.
Their system is basically "one big bucket of listens" - if your song gets listens, you get money. So if you pay your subscription and listen only to, say, 5 niche musicians, most of your money still goes to the most popular songs.
Now you might already notice the flaw here - if someone, say, sets up a bunch of bots that just stream their own songs to boost revenue, then not only does your subscription not pay the artists you actually listen to, part of it goes to the fraudulent ones as well.
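A rough sketch of why that is (numbers made up, and a hypothetical "user-centric" split shown only for contrast - not anything Spotify actually does):

```python
# Toy comparison of pro-rata ("one big bucket") vs. user-centric payouts.
# All numbers are invented for illustration.
subs = {
    "niche_fan": {"fee": 10.0, "plays": {"niche_artist": 50}},
    "casual_1":  {"fee": 10.0, "plays": {"pop_star": 200}},
    "casual_2":  {"fee": 10.0, "plays": {"pop_star": 300}},
}

pool = sum(u["fee"] for u in subs.values())
total_plays = sum(n for u in subs.values() for n in u["plays"].values())

# Pro-rata: the whole pool is split by share of ALL plays on the platform.
pro_rata = {}
for u in subs.values():
    for artist, n in u["plays"].items():
        pro_rata[artist] = pro_rata.get(artist, 0) + n
pro_rata = {a: round(pool * n / total_plays, 2) for a, n in pro_rata.items()}

# User-centric: each subscriber's fee is split only among the artists they played.
user_centric = {}
for u in subs.values():
    own_plays = sum(u["plays"].values())
    for artist, n in u["plays"].items():
        user_centric[artist] = round(user_centric.get(artist, 0) + u["fee"] * n / own_plays, 2)

print(pro_rata)       # {'niche_artist': 2.73, 'pop_star': 27.27}
print(user_centric)   # {'niche_artist': 10.0, 'pop_star': 20.0}
```

Under pro-rata, the niche fan's $10 mostly flows to the popular artist they never played - and bot-inflated plays dilute everyone's share further.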
Then there were problems with fake collaboration tags, AI music used to hijack artist profiles, and a few others.
> Their system is basically "one big bucket of listens" - if your song gets listens, you get money. So if you pay your sub, and listen to say 5 niche musicians only, it still all goes mostly to the most popular songs.
That's basically how radio is accounted for in royalties, as well.
With Spotify knowing exactly who listened to what, it could be more precise (and arguably more susceptible to fraud), but tbh what they do is standard (compulsory licensing) industry practice.
With radio, everyone that listens to a particular station is listening to roughly the same mix of songs, and they're "paying" (by listening to ads) on a per-hour basis.
If either of those were true of Spotify, the unfairness would go away.
But when different listeners are paying very different amounts per hour, any correlation between payment amount and preferred content causes problems.
Whenever an actual artist reveals their earnings, the numbers are absolutely pitiful.
A quick search suggests a very steep drop off from the top earners.
‘At 100 million streams, artists can earn approximately $300,000-$500,000 in gross royalties. However, the actual amount reaching the artist varies dramatically based on their contracts. Major label artists receive $90,000-$150,000 after the label’s cut, while independent artists could keep $255,000-$425,000 after distributor fees.’
https://rebelmusicz.com/how-much-do-artists-make-on-spotify/
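Back-of-the-envelope arithmetic from those quoted figures (rough, and the real per-stream rate varies a lot by country and plan):

```python
# Rough implied per-stream payouts from the quoted ranges (not official rates).
streams = 100_000_000
gross = (300_000, 500_000)                # quoted gross royalties
major_label_artist = (90_000, 150_000)    # quoted take-home after the label's cut
independent_artist = (255_000, 425_000)   # quoted take-home after distributor fees

per_stream = [g / streams for g in gross]                          # ~$0.003 to ~$0.005 per stream
label_share = [a / g for a, g in zip(major_label_artist, gross)]   # ~30% of gross reaches the artist
indie_share = [a / g for a, g in zip(independent_artist, gross)]   # ~85% of gross
print(per_stream, label_share, indie_share)
```

So even at 100 million streams, a major-label artist's implied take-home is on the order of a tenth of a cent per stream.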
I think most of us are ending up with a similar workflow.
Mine is: 1) discuss the thing with an agent; 2) iterate on a plan until I'm happy (reviewing carefully); 3) write down the spec; 4) implement (tests first); 5) manually verify that it works as expected; 6) review (another agent and/or manually) + mutation testing (to see what we missed with tests); 7) update docs or other artifacts as needed; 8) done
No frameworks, no special tools, works across any sufficiently capable agent, I scale it down for trivial tasks, or up (multi-step plans) as needed.
The only thing that I haven't seen widely elsewhere (yet) is the mutation testing part. The (old) idea is that you deliberately change the codebase to check that your tests catch the bugs. This was usually done with fuzzers or dedicated mutation tools, but now I can just tell the LLM to introduce plausible-looking bugs.
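A toy illustration of that step (the function and the injected bug are hypothetical; in my workflow the LLM proposes the mutants rather than a tool):

```python
# Mutation testing in miniature: inject a plausible-looking bug ("mutant")
# and check whether the existing tests catch ("kill") it.

def apply_discount(price, pct):
    return price * (1 - pct / 100)

def apply_discount_mutant(price, pct):
    # The kind of subtle bug an LLM might be asked to introduce.
    return price * (1 - pct / 10)

def test_discount(fn):
    assert fn(100, 10) == 90

test_discount(apply_discount)              # original passes
try:
    test_discount(apply_discount_mutant)
    print("mutant survived -> the tests have a gap")
except AssertionError:
    print("mutant killed -> the tests caught the injected bug")
```

A surviving mutant is the interesting result: it points at behaviour the test suite doesn't actually pin down.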
No /s here so just in case this is a serious point:
Agile is a set of four values (and twelve supporting principles) for software development.
Scrum is the two-week development window thing, but Scrum doesn't mandate a two-week _release_ window; it mandates a two-week cadence of planning and progress review, with a focus on doing small chunks of achievable work rather than mega-projects.
Scrum generally prefers lots of one-to-three-day pieces of work; I've yet to see Scrum training that doesn't warn against repeatedly picking up two-week jobs. If that's been your experience, you should review how you can break work down further to get to "done" on parts of it faster.
All good points here (and yeah I didn't add /s, hopefully "now you know!" was sufficiently obvious over-the-top).
All that said, in most orgs I've worked with, they were following agile processes over agile principles - effectively a waterfall with a scrum-master and dailies.
This is not to diss the idea of agile, just an observation that most good ideas, once through the business process MBA grinder, end up feeling quite different.
> All that said, in most orgs I've worked with, they were following agile processes over agile principles - effectively a waterfall with a scrum-master and dailies.
In my experience, they're all waterfall in scrum skin, except they also lose the one thing that was a strength of the old-school method: building up a large, well thought out, thoroughly checked spec up front.
So in the end, the "business process MBA grinder" reshapes any idea to fit leadership needs - and so here, Agile became all about the things that make software people predictable cogs in the larger corporate planning machine. They got what they needed anyway, but we threw away the bits that were useful to us.
I've had an AI assistant send me email digests with local news, and another watching a cron job, analyzing the logs and sending me reports if there's any problem.
I'd say that counts as yes.
(For clarity: neither are powered by Claude Code Routines. Rather, Claude Code coded them and they're simple cron jobs themselves.)
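For the curious, the log-watcher one is roughly this shape - a cron-run script that checks a log and emails only when something looks off. Everything below is made up, and a plain regex stands in for the "ask the model to analyze the log" step:

```python
# Sketch of a cron-driven log watcher that emails a report only on problems.
# Log path, addresses, and the failure pattern are all illustrative.
import pathlib
import re
import smtplib
from email.message import EmailMessage

LOG = pathlib.Path("/var/log/nightly-backup.log")       # hypothetical log file
SUSPICIOUS = re.compile(r"ERROR|Traceback|failed", re.IGNORECASE)

def main():
    hits = [line for line in LOG.read_text().splitlines() if SUSPICIOUS.search(line)]
    if not hits:
        return  # all quiet: no email, no noise
    msg = EmailMessage()
    msg["Subject"] = f"{LOG.name}: {len(hits)} suspicious lines"
    msg["From"] = "watcher@example.com"
    msg["To"] = "me@example.com"
    msg.set_content("\n".join(hits[:50]))
    with smtplib.SMTP("localhost") as s:                 # assumes a local MTA is running
        s.send_message(msg)

if __name__ == "__main__":
    main()

# Installed with a crontab entry along the lines of:
#   0 7 * * * /usr/bin/python3 /opt/watcher/check_log.py
```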
The cumulative death toll from Chernobyl, definitively the worst nuclear disaster in history, is estimated at between 4,000 and 16,000 (estimates via Wikipedia). A dam bursting upstream of a few small towns will kill many more[0].
For comparison, the Bhopal disaster (which is much less known in the West), which occurred on 3 December 1984 in Bhopal, Madhya Pradesh, India, caused between 3,928 and 16,000 deaths.
A government affidavit in 2006 stated the leak caused 558,125 injuries, including 38,478 temporary partial injuries and approximately 3,900 severely and permanently disabling injuries.
Most of Europe drinks water from underground aquifers, which could not be affected by Chernobyl. Even breathing air carrying radionuclides from Chernobyl at a large distance from the plant didn't deliver much of a radiation dose to the population. It was eating contaminated food and drinking contaminated milk that caused most of the dose.
The precise mechanism: radioactive particles fall to the ground, or are washed down by rain, and concentrate on vegetation with a lot of surface area - especially leafy vegetables and grass. Leafy vegetables are eaten directly by humans. Grass is eaten by cows, which concentrate the radionuclides again in their milk. Humans drink the milk and eat cheese made from it.
Not all radionuclides produced in nuclear fission have the same health impact on the population in a nuclear disaster. To pose a significant hazard, a radionuclide needs three properties: volatility, a suitable half-life, and bioaccumulation.
Volatility - some radioactive elements (heavy metals) are not carried far by air, while others, like radioactive noble gases, dilute very quickly.
Half-life - material with a short half-life decays into stable elements before migrating far; material with a very long half-life doesn't produce much radiation within a human lifetime.
Bioaccumulation - radioactive material needs to stay in the body to do damage. If it's eaten and excreted the next day, it usually doesn't cause much harm.
The most dangerous radionuclides for the general public in a nuclear disaster are:
Iodine-131 (half-life 8 days): iodine is stored in the thyroid gland and stays in the body for a long time. Children in particular need a lot of iodine per kilogram of body weight. In regions where food doesn't supply enough iodine (little seafood, table salt without added iodine), the body tries to capture every bit of iodine from the environment and hold onto it as long as possible.
Cesium-137 (half-life 30.04 years): an alkali metal that forms salts and tends to accumulate in soft tissues.
Strontium-90 (half-life 28.91 years): chemically similar to calcium; tends to be incorporated into bones and teeth and stays in the body for a very long time.
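For a sense of how much the half-life matters, the standard exponential-decay relation N(t) = N0 * (1/2)^(t / half-life) gives the fraction still present (back-of-the-envelope, using the half-lives above):

```python
# Fraction of a radionuclide remaining after `years`, given its half-life in years.
def remaining_fraction(years, half_life_years):
    return 0.5 ** (years / half_life_years)

# Roughly 39 years after the 1986 accident:
print(remaining_fraction(39, 8 / 365.25))   # Iodine-131: essentially zero within weeks
print(remaining_fraction(39, 30.04))        # Cesium-137: ~41% still present
print(remaining_fraction(39, 28.91))        # Strontium-90: ~39% still present
```

Which is why iodine-131 is an acute, first-weeks problem, while cesium-137 and strontium-90 are the long-term contamination problem.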
A big part of the population's radiation dose could have been prevented if the Soviet state hadn't tried to cover up Chernobyl and had stopped people from eating local food and drinking local milk, because most of the damage was done by ingesting iodine-131 in the first weeks after the accident. Timely administration of potassium iodide tablets would also have helped.
Chernobyl liquidators were exposed to a much broader range of radionuclides (material that did not migrate far) and to much higher concentrations (material that had not been diluted much).
Direct deaths: 2 killed by the explosion and debris (one of them never found) and 28 killed by acute radiation sickness.
There are many estimates of the impact of the Chernobyl disaster. I think the most comprehensive study is from the Chernobyl Forum.
"On the death toll of the accident, the report states that 28 emergency workers died from acute radiation syndrome and 15 patients died from thyroid cancer. It roughly estimates that cancers deaths caused by the Chernobyl accident might eventually reach a total of up to 4,000 among the 600,000 cleanup workers or "liquidators" who received the greatest exposures."
Both are good sources of energy. If you're going to make the argument that "nuclear is unsafe so we shouldn't do it", though, it's relevant to keep in mind that since we've had nuclear power, dam failures have outpaced nuclear by many times in terms of deaths per TWh (1).
Edit to add: before anyone jumps on me for this, it's important to note that without the Banqiao disaster the rates are about the same. That still means "nuclear is unsafe" is kind of a red herring.
"In August 1975, the Banqiao Dam and 61 others throughout Henan, China, collapsed following the landfall of Typhoon Nina. The dam collapse created the third-deadliest flood in history which affected 12,000 km2 (3 million acres) with a total population of 10.15 million, including around 30 cities and counties, with estimates of the death toll ranging from 26,000 to 240,000."
"After the disaster, the Chinese Communist Party and the Chinese government remained silent to the public, while no media were allowed to make reports."
"The official documents of this disaster were considered a state secret until 2005 when they were declassified."
> Hmmm, what would I do if giving up the right to veto hinged on my veto power?
If you're like most politicians, you would do what most politicians do - bargain.
For example: agree on veto removal but keep farm subsidies for another X years, or unblock the new "common debt" fund (or enshrine "no common debt fund", depending on which way you lean).
Member state politicians have made far more far-reaching decisions for far less: let us not forget that Cameron promised the Brexit referendum to increase his chances of winning an election - and then, fascinatingly, followed through.
As an EU citizen from a small state with little real power in the bloc, I'm all for replacing the veto with qualified-majority voting. I wouldn't want to see the EU deadlocked over a major issue just because some tiny country with the population of a London borough can wield a veto to settle a score with its neighbour.
Ask any Macedonian what they had to go through for the EU carrot. First they were vetoed by Greece because it didn't like the name. Fine, they changed it. Then they were vetoed by France (which previously was fine with this) because whatever.
Or ask any Ukrainian what they think of essential monetary aid, approved by (representatives of) a few hundred million Europeans, being held hostage by Putin's chum.
Even in less life-or-death cases, there's a lot of really (long-term) damaging horse-trading behind the scenes to wring concessions out of everyone because of the veto problem. It's a perversion of democracy.
> If you’ve never read Fred Brooks, I’d recommend it. The aphorism is a bit dated but rings true: you can’t add another developer and make the process go faster.
He didn’t say that. He said adding developers to a late project makes it slower, explained why, and even added some charts to illustrate it. The distinction matters.
By your interpretation, no company should have more than a few developers, which is obviously false. You can argue team organization, but that’s not what Brooks was saying, either.
On top of that, parent never said he hired 40 devs for one project at one time. He was talking in general terms, over the course of years, perhaps in multiple companies.
Finally, let me invoke another aphorism: hours of planning can save you weeks of development. Right here you have the bottleneck staring you in the face.
Of course it’s development. And unless you’re in a really dysfunctional environment, most of that development is coding, testing and debugging, where AI can help a lot.
Obviously SOMETIMES you can add more developers to a project and successfully speed it up, but Brooks's point was that it can just as easily have the opposite effect and slow the project down.
The main reason Brooks gives for this is the extra overhead you've just added to the project in terms of communication, management, etc. In fact, increasing team size always adds overhead, and the question is whether the new person adds enough value to offset or overcome it.
Most experienced developers realize this intuitively - it's always faster to have the smallest possible team of the best people.
Of course some projects are just so huge that a large team is unavoidable, but don't expect a linear speedup from adding more people. A 20-person team will not be twice as fast as a 10-person team. This is the major point of the book, and the reason for the title "The Mythical Man-Month". The myth is that men and months can be traded off, such that a "100 man-month" project that would take 10 people 10 months could therefore be accomplished in 1 month with a team of 100. The team of 100 may in fact take more than 10 months, since you've just turned a smallish, efficient team into a chaotic mess.
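Brooks's intercommunication point is easy to make concrete: pairwise communication paths grow as n(n-1)/2, so coordination overhead grows quadratically while hands grow only linearly (a toy illustration, not a quote from the book):

```python
# Pairwise communication channels in a fully connected team of n people: n*(n-1)/2
def channels(n):
    return n * (n - 1) // 2

for n in (3, 10, 20, 100):
    print(n, channels(n))
# 3 -> 3, 10 -> 45, 20 -> 190, 100 -> 4950
# Doubling the team from 10 to 20 roughly quadruples the coordination paths.
```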
Adding an AI "team member" is of course a bit different to adding a human team member, but maybe not that different, and the reason is basically the same - there are negatives as well as positives to adding that new member, and it will only be a net win if the positives outweigh the negatives (extra layers of specifications/guardrails, interaction, babysitting and correction - knowing when context rot has set in and time to abort and reset, etc).
With AI, you are typically interactively "vibe coding", even if in responsible fashion with specifications and guardrails, so the "new guy" isn't working in parallel with you, but is rather taking up all your time, and now his/its prodigious code output needs reviewing by someone, unless you choose to omit that step.
>> He didn’t say that.
> Actually he did, or something very close to it.
Yeah, the "something very close to it" is what I quoted. And I'll repeat: distinction matters.
> don't think you are going to get linear speedup by adding more people.
I neither said nor implied this. Of course communication and coordination are overhead. Let's quote Brooks from the same article some more: "The maximum number of men depends upon the number of independent subtasks."
Which is why in modern times you have a bunch of theoretical and practical research around team topologies, DORA, Reverse Conway Manoeuvre, the push to microservices, etc, etc. You can boil all that down to "maximize team independence while making each team as productive as possible."
This is a wonderful tangent (and if this interests you, I heartily recommend the Team Topologies book), but can we just keep in mind the gp never actually said he was overhiring for a single project? Parent latched onto a wrong idea and ran with it.