I'm also pretty sure a 14-point font is a bit outdated at this point; 16 should probably be the minimum with current screens. It's not as if screens aren't wide enough to fit bigger text.
Haha I keep forgetting that. Fortunately the browser remembers my zoom settings per page. I'm pretty sure the font is now at 16 or something via repeated Cmd +.
10 point at 96 dpi or with correctly applied scaling is very readable. But some toolkits like GTK have huge paddings for their widgets, so the text will be readable, but you’ll lose density.
Oh, that's annoying. Seems to me there wouldn't have been an issue if you'd just merged B into A after merging A into main, or the other way around, but that already works fine as you pointed out.
I mean, if you've got a feature set to merge into dev, and it suddenly merges into main after someone merged dev into main, then that's very annoying.
Huh interesting, my mental model is unable to see any difference between them.
I mean, a branch is just a flag stuck on a commit, with a polite note to move the flag along if you're working on it. You make a long trail, leave several flags, and merge the whole thing back.
Of course leaving multiple waypoints only makes sense if merging the earlier parts makes any sense, and if the way you continue actually depends on the previous work.
If you can split it into several small changes made to a central branch, it's a lot easier to merge things. Otherwise you risk making a new feature dependent on another even when there was no need to.
I'd probably go with something like the wave function collapse algorithm. It should be possible to make it generate trees with somewhat uniform probability.
Interesting idea, but the problem is that being connected and being non-cyclic (properties you want for a perfect maze, where you can reach every location and there is exactly one route between any two locations) are global conditions, which are difficult to enforce with the wave function collapse algorithm, whose constraints are local.
I think being connected is easy enough, being non-cyclic is trickier I suppose. If you do it badly the shape of the maze is going to depend on the order it's generated in. I imagine some people may have looked into it.
> being connected and being non-cyclic (properties you want for a perfect maze where you can reach every location and where there is exactly one route between every two locations)
Connected, sure, that's table stakes. But why is being non-cyclic a desirable property? (Other than it being the definition of "perfect maze", a term I've come to despise)
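For comparison, the usual way to get both properties without WFC is to grow a spanning tree directly. A minimal sketch (my own illustration, not anyone's library) using randomized DFS:

```python
import random

def perfect_maze(w, h, seed=None):
    """Carve a perfect maze (a spanning tree) on a w x h grid via randomized DFS."""
    rng = random.Random(seed)
    passages = set()            # undirected edges between adjacent cells
    visited = {(0, 0)}
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < w and 0 <= y + dy < h
                     and (x + dx, y + dy) not in visited]
        if neighbors:
            nxt = rng.choice(neighbors)
            passages.add(frozenset(((x, y), nxt)))  # carve one passage
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()         # dead end: backtrack
    return passages

maze = perfect_maze(8, 8, seed=1)
print(len(maze))  # 63, i.e. w*h - 1
```

Every cell gets exactly one passage when it's first reached, so the result has `w*h - 1` passages: connected and cycle-free by construction. The order-dependence mentioned above is real, though: DFS produces long-corridor mazes, whereas Wilson's algorithm samples spanning trees uniformly.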
This. It's so far out there that I have to wonder if it's a rogue employee who thought this was a good excuse to cause reputational damage without it being too obvious. It doesn't pass several razors though (it's not the simplest explanation, and it assumes malice... are those Occam's and Hanlon's razors?), so I don't truly believe it, but it would be possible.
Since it's AI and Microsoft I can believe that someone who doesn't know what they're doing would be given a mandate to promote AI under any means necessary at the cost of some other team's reputation.
But it's an insane move. If anything, AI has made it more important than ever to know who authored something, and then someone does this to promote AI.
Occam's razor is about the simplest solution often being the correct one.
Hanlon's razor is about not assuming malice, which makes no sense when applied to faceless mega-corporations or even random strangers where you know conflicting motives exist.
Thanks for confirming I remembered the razor names correctly!
I still don't assume malice from any individual employee, at least as a default until strongly indicated otherwise. The emergent behavior of complex artificial incentive systems is, of course, a whole other matter, so I can see what you mean: the razor won't apply there unless you break it down to an individual, as in the scenario I mentioned about an ill-meaning employee.
That's pretty much the reason why. Raymond Hettinger explains the philosophy well while discussing the `random` standard library module: https://www.youtube.com/watch?v=Uwuv05aZ6ug
I feel like much of this has been forgotten of late, though. From what I've seen, it's really quite hard to get anything added to the standard library unless you're a core dev who's sufficiently well liked among other core devs, in which case you can pretty much just do it. Everyone else will (understandably) be put through a PhD thesis defense, then asked to try the idea out as a PyPI package first (and somehow also popularize the package), and then, if it somehow catches on that way, get declined anyway because it's easy for everyone to just get it from PyPI (see e.g. Requests).
I personally was directed to PyPI once when I was proposing new methods for the builtin `str`. Where the entire point was not to have to import or instantiate anything.
There's bound to be a way to turn a stream of bytes into a stream of unicode code points (at least I think that's what python is doing for strings). Though I'm explicitly not volunteering to write the code for it.
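There is; Python ships an incremental decoder for exactly this in the `codecs` module. A sketch (the chunk size, encoding, and sample text here are arbitrary choices of mine):

```python
import codecs
import io

# A binary stream standing in for a file opened in "rb" mode.
raw = io.BytesIO("héllo wörld ".encode("utf-8") * 3)

# The incremental decoder buffers partial multi-byte sequences between calls,
# so it's safe to feed it chunks that split a code point down the middle.
decoder = codecs.getincrementaldecoder("utf-8")()
chunks = []
while True:
    b = raw.read(4)           # tiny chunks to force split sequences
    if not b:
        chunks.append(decoder.decode(b"", final=True))
        break
    chunks.append(decoder.decode(b))

text = "".join(chunks)
print(text == "héllo wörld " * 3)  # True
```

In practice you'd more likely wrap the stream in `io.TextIOWrapper`, which does the same incremental decoding for you and yields text lazily.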
Oh that's neat, though I might split this into two functions in most cases, no need to entangle opening the file and counting the words in a filelike object.
That's two neat tricks that I'm definitely adding to my bag of python trickery.
Sure, but making one string from the file contents is surely much better than having a separate string per word in the original data.
... Ah, but I suppose the existing code hasn't avoided that anyway. (It's also creating regex match objects, but those get disposed each time through the loop.) I don't know that there's really a way around that. Given the file is barely a KB, I rather doubt that the illustrated techniques are going to move the needle.
In fact, it looks as though the entire data structure (whether a dict, Counter, etc.) should be a relatively small part of the total reported memory usage. The rest seems to be internal Python stuff.
I dislike loading files into memory entirely, in fact I consider avoiding that one of the few interesting problems here (the other problem being the issue of counting words in a stream of bytes, without converting the whole thing to a string).
If you don't care about efficiency you can just do `len(set(text.split()))`, but that's barely worth making a function for.
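The streaming version isn't much longer. A sketch (function name is mine) that keeps only the counts in memory and, in the spirit of not entangling opening and counting, takes any iterable of lines rather than a filename:

```python
from collections import Counter

def count_words(lines):
    """Count word frequencies from any iterable of text lines (e.g. a file object)."""
    counts = Counter()
    for line in lines:        # file objects iterate lazily, one line at a time
        counts.update(line.split())
    return counts

counts = count_words(iter(["the quick brown fox ", "jumps over the lazy dog "]))
print(counts["the"])          # 2
print(len(counts))            # 8 unique words
```

Called as `count_words(open(path, encoding="utf-8"))`, this never holds more than one line of the file plus the counts, though the unique words themselves still end up as separate strings in the Counter.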
That's why I included the bottom part describing how some pins are longer than others. That's how hot-plugging works in most cases: first the ground pins make contact, then the next set of pins, and so on. The same staggering is what lets the device detect the physical act of being disconnected.
I can't really think of a polite way to phrase this, but I'm not surprised that throwaway mobile apps do benefit while relatively mature Python packages do not. That matches my estimation of how much programming skill you can reasonably extract from the current LLMs.
Really, the one thing that has conclusively changed is that "ask it on Stack Overflow" has become "ask an LLM". Around 95% of Stack Overflow questions can be answered by an LLM with access to the documentation; not sure what will happen to the other 5%. I don't think Stack Overflow will survive a 20-fold reduction in size, if only because their stance on not allowing repeat questions means exponential growth was the main thing preventing them from becoming stale.
> I'm not surprised throwaway mobile apps do benefit, while relatively mature python packages do not.
Right.
I don't think you even need cynicism, or whatever it was you felt impolite about thinking:
I'd expect the top mature libraries to be the most resistant to AI tool use for various reasons. They already have established processes, they don't accept drive-by PR spam, the developers working on them might be the least likely to be early adopters, and -- perhaps most importantly -- the todo list of those projects might need the most human comms, like directional planning rather than the sort of yolo feature impl you can do in a one-man greenfield.
All to further bury signals you might find elsewhere in broader ecosystems.
I would expect nearly all of these developers to be technologically sophisticated, and most of them to have tried AI-assisted coding and to be unafraid to use it if they thought it brought some benefit.
> Note that a more complete model would multiply each term by P(track)_j — the common-mode detection-tracking-classification factor developed in the previous section — but the standard WTA formulation assumes perfect tracking.
I'm not sure that is a useful model, or more complete. I don't think you can assign interceptors to undetected missiles, so considering their effect on the value is rather pointless. It's effectively a sunk cost.
Multiplying by the probability also makes no sense from an optimisation point of view. Why would you assign a lower value to a target about to be hit simply because you were unlikely to have detected the missile?
The tracking probability only shows up in the meta game described at the end, where one side is trying to optimise their ability to hit valuable targets and the other is trying to optimise their ability to prevent that from happening.
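For reference, the standard static WTA objective the quote calls the "standard WTA formulation" looks roughly like this (my reconstruction; the symbols $V_j$, $q_{ij}$, $x_{ij}$, $m_i$ are my notation, not the article's):

```latex
\max_{x}\; \sum_{j} V_j \left( 1 - \prod_{i} q_{ij}^{\,x_{ij}} \right)
\qquad \text{subject to} \qquad \sum_{j} x_{ij} \le m_i \quad \forall i
```

where $V_j$ is the value of target $j$, $q_{ij}$ is the probability that target $j$ survives a single interceptor of type $i$, $x_{ij}$ is the number of interceptors of type $i$ assigned to it, and $m_i$ is the inventory. The quoted modification scales each term by $P(\text{track})_j$, which effectively rescales $V_j$ and so shifts assignments away from poorly tracked targets, which is exactly the objection raised above.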