The fundamental idea is that "intelligence" really means trying to shorten the time to figure out something. So it's a tradeoff, not a quality. And AI agents are doing it.
Therefore, if that perspective is right, the issues that the OP describes are inherent to intelligent agents. They will try to find shortcuts, because that's what they do, it's what makes them intelligent in the first place.
People with ASD or ADHD or OCD are idiot-savants in the sense of that paper. They insist on searching for solutions that are not easy to find, despite common sense (aka intelligence) telling them otherwise.
The paradox is that doing this is valuable, yet it is not smart. And it's probably why CEOs beat geniuses in the real world.
CEOs beat geniuses in the real world because they often have other pathologies, like enough moral flexibility to ignore the externalities of their profit centers.
I'd also argue there's some training bias in the performance; it's not just smart shortcuts. Claude especially seems prone to slipping into a 'wrap it up' mode even when the plan is only halfway completed, and it starts deferring tasks rather than completing them.
> The fundamental idea is that "intelligence" really means trying to shorten the time to figure out something.
"Figure out" implies awareness and structured understanding. If we relax the definition too much, then puddles of water are intelligent and uncountable monkeys on typewriters are figuring out Shakespeare.
If you were caught with notebooks detailing your plans to kill a list of people, showing that you've meticulously tracked their movements and listing locations for dumping the bodies, that would be extremely relevant. I don't see how it'd be a good idea to exclude that kind of evidence.
When Agile came to the company (a large American corp) I work for, around 2015 (arguably quite late), I was quite skeptical. In my opinion, a decent waterfall process (basically a sort of compromise) worked pretty well, and I didn't like fake "innovations" like Scrum or renaming everything in project-management terminology.
Then I read Steve Yegge's Good Agile, Bad Agile. It basically says that Agile is just a Kanban queue. I think I got it, and it has been working very well, at least from the project-management side.
There are, IMHO, three management angles from which to look at any engineering project: product, project, and architecture. If you are building a house, you need a blueprint to tell the builders where to put which concrete, you need a render (or paper model) to show the customer, and you need a BOM and a timeline to keep the investors happy. Software is no different. But that's also where the misunderstandings about Agile come from: product management, project management, and engineering all have different ideas about what kind of "plan" is needed.
So in the case of software, specs are like the house's blueprint. In some cases a prototype can serve as the spec, in some cases not. It's just not the type of plan that project or product management cares about.
Regarding the project-management angle: for me, Agile today is clearly Kanban, and almost everything else is wrong or not required. I often make an analogy with computers. In the 50s and 60s, people tried to plan the work a computer executes by creating scheduling algorithms that reserve resources ahead of time, avoid conflicts, and so on. Eventually, we found out that simple dispatch queues work best: don't estimate at all how long a task will take; just give it a priority and a time slice and let it run. I think the same applies to software development, and it's time project managers took note from computer scientists - they already know.
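The dispatch-queue idea can be sketched in a few lines of Python (the task names and priorities are made up for illustration; `heapq` stands in for a real scheduler):

```python
import heapq

# Minimal dispatch queue: tasks get a priority, never an estimate.
# Lower number = higher priority; a counter breaks ties in FIFO order.
queue = []
counter = 0

def dispatch(task, priority):
    global counter
    heapq.heappush(queue, (priority, counter, task))
    counter += 1

dispatch("fix prod outage", 0)
dispatch("refactor billing", 2)
dispatch("review PR", 1)

# Pop in priority order - no up-front schedule, no duration estimates.
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # ['fix prod outage', 'review PR', 'refactor billing']
```

The point of the analogy: the queue only needs an ordering, not a prediction of how long each task will run.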
That doesn't mean software development time cannot be estimated if you need to; it's just not very efficient to do so (it takes extra time, depending on how good an estimate you want).
I would agree that it makes them anything but elementary. I am honestly not even sure whether there is a finite, constructible basis of functions that can express every root of single-variable integer polynomials.
And for multivariate polynomials, even deciding whether integer roots exist is impossible, by the MRDP theorem.
It is not known, and the model problem for this is Hilbert's 13th [1].
Nonetheless, "elementary function" is a technical term dating back to the 19th century; it's very much not a general adjective whose synonym is "basic".
Nevertheless, it is a horrible definition. Mathematicians have usually taken care to define things as close to everyday intuition as they could (and then proved an equivalence). The "elementary function" of this definition is just a weird mix of concerns.
The proof that free markets are efficient (even in the narrow sense economists use this word) relies on an assumption of perfect information. This has been known at least since Akerlof.
The Misesian folks are a lost cause, IMHO. They're hardcore rationalists, self-indulging in circular moral arguments from assumptions that don't apply in the real world.
That's what makes the insider trading argument so tantalizing--it's arguing that it helps move the market closer to perfect information. But, of course, the world is complicated and dynamic, and it tacitly depends on all kinds of assumptions and beliefs about the resulting costs and benefits. It would be nice if the debate shifted to pinning down those assumptions, quantifying them as best as possible, and then iteratively tweaking and adjusting regulatory models. But that's true of just about everything and probably too unrealistic an ask, especially at a time when one side is convinced markets are just a mechanism for unjust exploitation, and the other side is convinced regulation is what sustains inequity (to the extent inequity is something even worth caring about).
1. One should also add absolute value (as sqrt(x*x)?) to the desired functions, and from it derive min, max, and signum among the available functions. Since the domain is complex, some of these will be a bit weird; I am not sure.
2. I think that for any bijective function f(x) which, together with its inverse, is expressible using eml(), we can obtain another universal basis eml(f(x),f(y)) with the added constant f^-1(1). An interesting special case is f=exp or f=ln. (This might also explain the EDL variant.)
3. The eml basis uses natural logarithm and exponent. It would be interesting to see if we could have a basis with function 2^x - log_2(y) and constants 1 and e (to create standard mathematical functions like exp,ln,sin...). This could be computationally more feasible to implement. As a number representation, it kinda reminds me of https://en.wikipedia.org/wiki/Elias_omega_coding.
4. I would like to see an algorithm for finding derivatives of eml() trees. This could yield a rather clear proof of why some functions do not have indefinite integrals in symbolic form.
5. For some reason, extending the domain to complex numbers made me think about fuzzy logics with complex truth values. What would be the logarithm and exponential there? It could unify the Lukasiewicz and product logics.
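On point 1, the standard identities behind that derivation can be written out explicitly (they hold on the reals; over the complex numbers, sqrt(x*x) has branch-cut issues, hence the weirdness noted above):

```latex
% Over the reals (branch cuts intervene over C):
\lvert x \rvert = \sqrt{x^{2}}, \qquad
\operatorname{sgn}(x) = \frac{x}{\lvert x \rvert} \quad (x \neq 0),
\qquad
\min(a,b) = \frac{a + b - \lvert a - b \rvert}{2}, \qquad
\max(a,b) = \frac{a + b + \lvert a - b \rvert}{2}.
```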
> It exists to exchange a future nuclear war with Iran with a conventional war today.
That's just ridiculous. Nobody can predict the future, so trading an uncertain war in the future for a certain war today is completely irrational. (And for the same reason, the war today is unlikely to be easier than the war tomorrow.)
Besides, Iran has avoided building a nuclear weapon because it would cause too many civilian casualties, which is against their beliefs. In this respect, they're more civilized than Americans (and Europeans), even if that might be considered an irrational view by barbarians like you.
I think you're just coping with the fact that this war was utterly pointless, destructive for almost everyone in the world, and a poor attempt by a small group of people to increase their power.
Former Iranian Majles member Ali Motahari said in an April 24, 2022 interview on ISCA News (Iran) that when Iran began developing its nuclear program, the goal was to build a nuclear bomb. He said that there is no need to beat around the bush, and that the bomb would have been used as a "means of intimidation" in accordance with a Quranic verse about striking "fear in the hearts of the enemies of Allah."
"When we began our nuclear activity, our goal was indeed to build a bomb," former Iranian politician Ali Motahari told ISCA News. "There is no need to beat around the bush," he said.
Read the last two lines of that interview. Khamenei interpreted Islam as forbidding even building the bomb, and he is the moral authority on this, like it or not.
Japan could also have built a nuclear bomb, but chose not to. They decided that out of nothing other than their moral beliefs.
You simply don't want to accept that other cultures can be (in some respects, and even regardless of what individuals think on average - that's probably similar for large enough groups) more ethical than your own.
Iran enriched over 450 kg of uranium to at least 60%.
There's no need for anything over 5% enrichment for power-plant use. They were preparing HEU for weapons; whether those weapons were to be built now or in 20 years is irrelevant.
Per international agreements, it was their right. The idiotic thing about this argument is that now everyone knows they want nukes, and that not having them is a strategic mistake - because Iran and Ukraine did not have one. Meanwhile, countries with nukes are safer.
Yes, I agree, except it's not irrelevant whether they built a functional nuke or not, because this is used as a justification for war. (Not to mention that, as a justification for war, "they could have built a nuke" is even more barbaric than "they have built a nuke".)
Still, that doesn't counter the fact that they didn't actually make a nuclear bomb out of the material, nor the fact that their highest moral authority forbade them from doing so. So it doesn't do anything to disprove that, culturally, they are more civilized (in that respect).
(Maybe an example from a corporation would clarify this better: the fact that a group of people inside it is doing things unethically doesn't mean the company as a whole condones that behavior, even if its structure - how the corporation, or capitalist society, is constructed - might lead some people to do it internally, off the books. But once it is known to the CEO - the highest moral authority in a corporation - he must tell them to stop if he is not to be implicated.)
It's frankly just moving the goalpost in an attempt not to accept your own barbarism. Is your culture OK with using nuclear weapons, even in self-defense? If yes, how do you dare to judge?
> their highest moral authority banned them from doing that
This means nothing. Iran says one thing publicly, then privately does another. Ayatollah Ali Khamenei said his country would not develop ballistic missiles with a range exceeding 2,000 kilometers [0]; yet they secretly developed missiles with a range of 4,000 km [1].
Personally, I think we're using LLMs wrong for programming. Computer programs are solutions to a given constraint logic problem (the specs).
We should be using LLMs to translate from (fuzzy) human specifications to formal specifications (potentially resolving contradictions), and then solving the resulting logic problem with a proper reasoning algorithm. That would also guarantee correctness.
Full program inference from specs is actually a very hard problem, because the compiler/SAT solver cannot autonomously derive the loop invariants (or, similarly, induction hypotheses) necessary to write correct code. So using an LLM that can look at the spec and propose a heuristic solution makes a lot of sense. Obviously the solution still has to be verified, though.
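A toy illustration of that division of labor, in Python: an LLM might propose a loop invariant, and a mechanical checker then verifies it. Here the program, the candidate invariant, and the brute-force check (standing in for an SMT solver) are all made up for the sketch:

```python
# Program under verification:
#   s = 0; i = 0
#   while i < n: s += i; i += 1
# Desired postcondition: s == n*(n-1)//2

def invariant(s, i):
    # Candidate invariant (the "LLM's guess"): s is the sum of 0..i-1.
    return s == i * (i - 1) // 2

def invariant_is_inductive(bound=50):
    # Base case: invariant holds on loop entry (s = 0, i = 0).
    if not invariant(0, 0):
        return False
    # Inductive step (checked by brute force up to `bound`):
    # from any state satisfying the invariant, one iteration preserves it.
    for i in range(bound):
        s = i * (i - 1) // 2
        if not invariant(s + i, i + 1):
            return False
    return True

print(invariant_is_inductive())  # True
```

The heuristic guess is cheap to check once it exists; finding it automatically is the hard part the LLM is being asked to do.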
Perhaps you meant to say "coding", not "programming". AI is immensely helpful for programming. Coding is just the last - and, in a proper programming session, sometimes even unnecessary - step: there are times when an adequate investigation requires deleting code rather than writing new code, or writing pages of documentation without a single code change.
You have to be a detective and know which threads to pull to rope in the relevant data, digging inductively and deductively - soaring high to get the "big picture" and diving into the depths of a single-line code change.
I've been developing software for decades now (not claiming to be great, but at least I think I've built certain intuition and knack for it), and I always struggled with the "story telling" aspect of it - you need to compose a story about every bug, every feature request - in your head, your notes, your diagrams. A story with actors, with plot lines, with beginning, middle, and end. With a villain, a hero, and stakes. But software doesn't work that way. It's fundamentally an exploratory, iterative, often chaotic process. You're not telling what happened - you're constructing a plausible fiction that satisfies the format. The tension I felt for decades is that I am a systems thinker being asked to repeatedly perform as a narrator, and that is hard.
Modern AI is already capable of digging up the details for my narrative. I gave it access to everything - Slack, Jira, GitHub, Splunk, k8s, Prometheus, Grafana, Miro, etc. - and now I can ask it to explain a single line of code, including historical context, every conversation, every debate, every ADR, diagram, bug, and stack trace. It's complete bananas.
It doesn't mean I don't have to work anymore; if anything, I have to work more now, because now I can - the reasons become irrelevant (see Steve Jobs' janitor-vs-CEO story). Did I earn a leadership role, or has AI granted it? Forced me into it? Honestly, I don't know anymore. I have mixed feelings about all of it. It is exciting and scary at the same time. Things I dreamed about are coming true in ways I couldn't even imagine, and I don't know how to feel about that.
In case you’re not familiar, I will point you to the classical program synthesis literature. There the task is to take a spec written in say first-order logic, and output a program that satisfies this spec.
I think the biggest barrier to adoption of program synthesis is writing the spec/maintaining it as the project matures. Sometimes we don’t even know what we want as the spec until we have a first draft of the program. But as you’re pointing out, LLMs could help address all of these problems.
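A tiny example of why a spec is easier to state than to maintain alongside a program: the spec pins down *what* (here, sorting) without saying *how*. The predicate and the deliberately naive "synthesized" candidate below are illustrative, not any particular synthesis tool's API:

```python
from itertools import permutations

def satisfies_sort_spec(inp, out):
    # Spec for "sort": output is ordered AND is a permutation of the input.
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    same_elements = sorted(inp) == sorted(out)
    return ordered and same_elements

def naive_sort(xs):
    # A brute-force "synthesizer": search permutations for one
    # that satisfies the ordering half of the spec.
    return next(list(p) for p in permutations(xs)
                if all(a <= b for a, b in zip(p, p[1:])))

print(satisfies_sort_spec([3, 1, 2], naive_sort([3, 1, 2])))  # True
```

Any program passing the predicate is correct by construction with respect to this spec; the open question the comment raises is who writes and evolves the predicate as requirements drift.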
There is interesting ongoing research (https://dnhkng.github.io/posts/sapir-whorf/) showing that LLMs think in a language-agnostic way. (It will probably get posted to HN after it is finished.)
I would expect that. But I’d also expect the pattern of their thoughts to look more varied in structure like C or German, and less like totally uniform s-expressions.