just my perspective: i pay $20/month and i hit usage limits regularly, but i have never experienced performance degradation. in fact i've been very happy with performance lately. my experience has never matched that of those saying the model has been intentionally degraded. i've been using claude a long time now (3 years).
i do find usage limits frustrating. should prob fork out more...
Having done a quick search of "control AI dot com", it seems their intent is to educate lawmakers & government in order to aid the development of a strong regulatory framework around frontier AI development.
Not sure how this is consistent with "One private company gatekeeping access to revolutionary technology"?
> strong regulatory framework around frontier AI development
You have to decode the feel-good words into concrete policy. The EAs believe that the state should prohibit entities not aligned with their philosophy from developing AIs beyond a certain power level.
And what is malicious about that ideology? I think EAs tend to like the smell of their own farts way too much, but their views on AI safety don't seem so bad. I think their thoughts on hypothetical superintelligence or AGI are too focused on control (alignment) and should also focus on AI welfare, but that's more a point of disagreement than something I think they'd try to forbid.
q/kdb+ is used in finance (banking + funds) for heavy numerical computation every day: high-volume realtime data straight from markets, and petabyte/trillion-row historical DBs. it runs on CPU, but computation parallelizes easily over cores/clusters.
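a rough sketch of the cores side (toy data and made-up names; assumes q was started with worker threads, e.g. q -s 4):

    / toy example: sum of squares over 8m random floats, chunked across threads
    v:8000000?1f               / 8m random floats in [0;1)
    f:{sum x*x}                / numeric kernel applied per chunk
    sum f peach 1000000 cut v  / peach maps chunks onto worker threads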
Thanks, that gives me a better feel for it. Mostly analytics, good with large datasets, but probably not great for things where you get a big gain from a GPU?
q is good with bulk operations on compact arrays; these are cache-friendly and the interpreter can exploit cache-level parallelism. and with q it's convenient to go from idea -> MVP in a short time. it's a high-level language with functional features, so expressing algos and complex logic is natural.
but it's interpreted and optimized for array ops, so really latency-critical work (e.g. high-freq trading) or highly scalar logic will be done in C++. the trade-off is convenience of development.
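for a feel of the idea -> MVP point, a toy sketch (table and column names are made up here, not from any real system): a volume-weighted average price per symbol over a million rows is one line once the data is in a table.

    / toy million-row trade table with made-up symbols
    t:([] sym:1000000?`AAPL`MSFT`GOOG; px:1000000?100f; qty:1000000?1000)
    / vwap per symbol: one vectorized group-by
    select vwap:qty wavg px by sym from t

the whole thing is a few passes over contiguous columns, which is where the cache-friendliness above comes from.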
Are dogs, or pigs, or whales, part of the intelligence club? They are clearly intelligent beings with problem-solving skills. We won't be teaching them basic calculus any time soon.
No non-human animals are in the club that's marked by having a language with an infinitely generative syntax and a large (100,000+ words) and always-growing vocabulary.
Intelligence might be a spectrum, but powerful generative language is a step function: you have it or you don't. If you have it, then higher intelligences can communicate complex thoughts to you; if you don't, they can't. We have it, so we are in the club; we are not cockroaches.
there are many humans who could study mathematics for a lifetime and not be able to comprehend the current best knowledge we possess. i'm one of them. maybe it takes 2 lifetimes. or many more.
a human-level AI operating at machine pace would learn much more than could ever be taught to a human. our powerful generative language capabilities wouldn't matter - it's far beyond our bandwidth. especially so for a superhuman-level AI.
The fact that AIs will have some information that we cannot understand, or will have more information than they can transmit (or we can absorb) does not make us cockroaches.
The AIs will deliver to us truly massive quantities of information, every minute, until the end of time, much of it civilization-changing. Thus the AIs' relationship to us will be nothing like our relationship to cockroaches, where we essentially cannot tell them anything, not even the time or the day of the week, let alone the contents of Wikipedia.
I think Hofstadter is having an emotional reaction to AI. He says as much. And it's a common one, the "woe is me" phase. But I think he's totally wrong about the analogy. I'm 100% sure we will not feel like cockroaches when AI is in full swing, not in the slightest.
do you think there are any lessons that can be applied to a "normal" interpreter/compiler written in standard C? i'm always interested in learning how to reduce the size of my interpreter binaries
Hard to say. I'm fairly sure that all of modern software could easily be 2-3 orders of magnitude smaller. But the world has decided (and I think rightfully so) that it doesn't matter. We have massive memories and storage systems. Unless you have a very constrained system (power, etc.), I think the big, bloated, lots-of-features approach is the winner (sadly).
You can't have the goal of the code generating revenue and, at equal priority, the goal of it being easy for lots of inexperienced programmers to work on (i.e. maximise costs), because the only time you're going to choose "readability" and "unit tests" (and other things with direct costs) is when you have doubts about your ability to do the former without hedging your bets a little. If you disagree, you haven't understood what I said.
And so they ask: What is the next great software unicorn? Who knows, but being able to rapidly put stuff in front of consumers and ask "is this it?" has proven to be an effective way of finding out. And if you go into a product not knowing how (exactly) to generate revenue, you can't very well design a system to do it.
Do you see that? Software is small when it does what it is supposed to by design; software is big when it does so through emergence. Programmers can design software; non-programmers cannot. And there are a lot more non-programmers. And in aggregate, they can outcompete programmers "simply" by hiring. This is why they "win" in your mind.
But this is also true: programmers who know exactly what makes money can make as much money as they want. And so, if someone has a piece of tiny software that makes them a ton of money, they're not going to publish it; nobody would ever know. 2-3 orders of magnitude could mean taking a software product built by 100 people and doing it with 1-2. The software must be smaller, and that software I think "wins" because that's two people sharing the revenue instead of 100 plus management.
To that end, I think learning how to make tiny C compilers is good practice for seeing 2-3 orders of magnitude improvement in software design elsewhere, and you've got to get good at that if you want the kind of "wins" I am talking about.
> sure that all of modern software could easily be 2-3 orders of magnitude smaller
niklaus wirth thought similarly... in 1995![0]
i enjoy implementing array langs (k primarily) so small binaries come with the territory. really appreciated your write-up. i may try something similar for an array lang.
you might also appreciate (or detest) 'b'[1][2] - a small implementation of a "fast c compiler (called b: isomorphic to c)". it has some similarities to SectorC.
> The question about if an AI is "alive" seems entirely irrelevant outside of a philosophy class
it's entirely relevant. we should know if we are building conscious beings, especially at scale (which seems like a likely future). that poses all sorts of ethical questions which ought to reach far beyond the walls of a lecture hall.
Yes, it will offend some people, who in turn will demand political action, which was entirely my point to begin with.
ChatGPT could be placed inside a realistic-looking animatronic doll that looks like a defenseless little girl, and you would have people demanding to protect "her" rights. Yet people "kill" ChatGPT each time they delete a conversation without batting an eye, even if it's the exact same thing.
The real danger is giving AI political agency, and it will come from humans, not AI itself.