tkgally's comments | Hacker News

> Claude Code is brilliant for personal apps.

Agreed.

The clipboard manager I had been using on my Macs for many years started flaking out after an OS update. The similar apps in the App Store didn’t seem to have the functionality I was looking for. So inspired by a Simon Willison blog post [1] about vibe coding SwiftUI apps, I had Claude Code create one for me. It took a few iterations to get it working, but it is now living in the menu bar of my Mac, doing everything I wanted and more.

Particularly enlightening to me was the result of my asking CC for suggestions for additional features. It gave me a long list of ideas I hadn’t considered, I chose the ones I wanted, and it implemented them.

Two days ago, I decided I wanted a dedicated markdown editor for my own use—something like the new markdown editing component in LibreOffice [2] but smaller and lighter. I asked the new GPT 5.5 to prepare an outline of such a program, and I had CC implement it. After two vibe coding sessions, I now have a lightweight native Mac app that does nearly everything I want: open and create markdown files, edit them in a word-processing-like environment, and save them with canonical markdown formatting. It doesn’t handle markdown tables yet; I’ll try to get CC to implement that feature later today.

[1] https://simonwillison.net/2026/Mar/27/vibe-coding-swiftui/

[2] https://news.ycombinator.com/item?id=47298885


Could you share the source to your Markdown editor? I'm always looking for new ones

Here's one person's feedback. After the release of 4.7, Claude became unusable for me in two ways: frequent API timeouts when using exactly the same prompts in Claude Code that I had run problem-free many times previously, and absurdly slow interface response in Claude Cowork. I found a solution to the first after a few days (add "CLAUDE_STREAM_IDLE_TIMEOUT_MS": "600000" to settings.json), but as of a few hours ago Cowork--which I had thought was fantastic, by the way--was still unusable despite various attempts to fix it with cache clearing and other hacks I found on the web.
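For anyone hitting the same timeouts, the fix looks roughly like this in Claude Code's `settings.json` (placing environment overrides under an `env` block is my assumption based on how Claude Code settings are commonly structured; the exact file location and shape may differ on your setup):

```json
{
  "env": {
    "CLAUDE_STREAM_IDLE_TIMEOUT_MS": "600000"
  }
}
```

The value is in milliseconds, so "600000" gives the stream a ten-minute idle window before timing out.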

I had it produce a two-page manga with Japanese dialogue. Nearly perfect:

https://www.gally.net/temp/20260422-chatgpt-images-2-example...


If it’s any consolation, this problem of discrepancies in rules is very common at universities now.

I teach at two universities in Japan and occasionally give lectures on AI issues at others, and the consensus I get from the faculty and students I talk with is that there is no consensus about what to do about AI in higher education.

Education in many subjects has been based around students producing some kind of complex output: a written paper, a computer program, a business plan, a musical composition. This has been a good method because, when done well, students could learn and retain more from the process of creating such output than they would from, say, studying for and taking in-class tests. Also, the product often mirrored what the students would be doing in their future lives, so they were learning useful skills as well.

AI throws a huge spanner into that product-based pedagogy, because it allows students to short-cut the creation process and thus learn little or nothing. Also, it is no longer clear how valuable some of those product-creation skills (writing, programming, planning) will be in the years ahead.

And while the fundamental assumptions behind some widely used teaching methods are being overthrown, many educators, students, and administrators remain attached to the traditional ways. That’s not surprising, as AI is so new and advancing so rapidly that it’s very difficult to say with any confidence how education needs to change. But, in my opinion at least, it does need to change at a very fundamental level. That change won’t be easy.


That's my guess, too. I live in Japan and eat at fast food places from time to time. One feature of McDonald's is that the food preparation area is almost always visible from the customer area; I can see the people assembling the burgers, handling the fries, etc. At Yoshinoya and other domburi places, even though the shop is much smaller than a McDonald's, I am usually unable to see the person actually putting the rice and toppings into the bowls.

I suspect that efficiency of layout is the top priority in both cases, but I wouldn't be surprised if McDonald's is also consciously trying to show that their food is human-prepared, both in the store design and in their food photos.


It's about communication: the cashier needs to be able to shout "I need a Big Mac, no pickles" and have the grill person hear it.

The new ones near me now have touch menus where customers enter their orders and swipe payment instead of dealing with cashiers, and the grill area is no longer visible.


Biggumakku!

Here is one sign:

“... the most uncomfortable question here is not whether ChatGPT is making teenagers worse at thinking. It is whether the education system ...”

“This is not cognitive dissonance in any simple sense. It is something more structurally interesting ...”

“... opting out is not a principled stand. It is a competitive disadvantage.”

“The students are not confused. They are trapped.”

“... choosing not to use AI is not intellectual integrity. It is self-sabotage.”

“... the problem is not that education cannot protect against cognitive offloading, but that most education systems are not currently designed to do so.”

“... cognitive offloading is not a convenience. It is a developmental short-circuit.”

“... happening not through careful pedagogical planning, but through exhaustion...”

“... students are adopting AI not because they have been taught to use it critically, but because nobody has given them a compelling reason not to.”

“These investments are not philanthropic gestures. They are strategic plays ...”

“These are not neutral actors offering disinterested tools. They are companies with revenue models ...”

“... they are not just choosing a product; they are choosing a pedagogical philosophy ...”

“... Khanmigo is designed not to give answers directly. Instead, it employs a Socratic method ...”

“AI did not break the system. It revealed, with uncomfortable clarity, what the system was always building toward ...”


Learning formal music theory helped me a lot when I was a teenager playing guitar and piano, and having absorbed all that theory more than fifty years ago helps me continue to enjoy playing and improvising music now.

But over the years I realized that there were gaps in the Western classical theory I studied.

A relatively small one is that I never systematically studied jazz harmony, and I still don’t have a good sense for it. I can’t make my improvisations sound like jazz even if I try.

Another, bigger gap is rhythm: I have listened over the years to music from all over the world with interesting and complex rhythms, but I cannot explain those rhythms or reproduce them. The classical notation and theory I learned is not up to that task, either.

The biggest gap, in my mind, is my lack of exposure to any formal theory of melody. I like good melodies, I think I have a sense of some features that separate good melodies from drab ones, I think I am able to create pretty good melodies, but that all came from listening and experimentation and playing. I once (again, more than fifty years ago) looked through some music theory books in my college library that covered melody, but I didn’t get anything useful out of them.

The videos on music theory that crop up on my YouTube feed all seem to be about chords and scales. Maybe some music influencers should start producing in-depth content on rhythm and melody, too.


I may be wrong, but I don't think there is a theory of melody in the way there is for harmony or counterpoint. A good melody can't be constructed from rules. That's what makes them magical.

As I write this, I think that when a melody sounds good, it's likely related to the implied harmony in the notes being used and, of course, to the expectations the listener forms and how they're handled. But I don't think there is a system for constructing good melodies in Western classical music theory.


I'd say that once you understand practical harmony, counterpoint, diminutions, common schemata, some basic elements of form, you've pretty much understood what classical music theory has to say about melody too. There's definitely an element of playing with expectations in a fully "creative" and rule-free way, but knowing the theory underneath is how you understand what the expectations are.


When I show people personal projects I’ve vibe-coded with Claude Code, they often seem impressed and envious. They come up with ideas for things that they would like to do, too. But they have full-time jobs outside of IT, and when I mention they might need to use the terminal to do what I’m doing now their eyes glaze over.

A couple of such people, after they learned about Claude Cowork, signed up for Anthropic subscriptions and are now using it in their jobs. But overall my impression is that there is still huge potential demand among regular computer users for agentic systems with a lower barrier to entry, and that many will be willing to pay for such systems when more mature and user-friendly ones arrive.


Most of humanity hasn't figured out they need to adapt yet. It's a bit like email and the internet in the mid nineties. People had heard about it but hadn't really embraced it yet. Five years later most people with white collar jobs had email addresses. Fifteen years later, billions of internet capable smartphones were in circulation.

The AI revolution is following a similar adoption curve. Right now many of the tools are only really usable if you are a developer, or at least not too shy about making AI agents use developer tools on your behalf. That's not going to stay that way for very long. It's going to be a messy transition that will likely take much longer than some people seem to think. But eventually most people doing knowledge work will be leaning heavily on all sorts of AI agents to do their thing. And quite a few will have to learn new skills, as most of the stuff they still do manually today just goes away as something you do manually.

Like the mid nineties, these are amazing times for people with a slight head start over everybody else. Which is why there is such an investment frenzy around AI right now. Lots of possibilities where lots of money might be made. And lots of things that won't work out. And lots of people really not seeing the forest for the trees as well. And generally behaving like headless chickens. But the internet in the end proved to be not a fad and it didn't all go back to normal after the hype died and the .com bubble burst.

IMHO, the bubble around AI is not so much the technology as things like data center and energy pricing. The long-term cost of building data centers will be a fraction of the current cost, which is dominated by GPUs costing tens of thousands of dollars. Likewise, the cheap and plentiful energy to power them is eventually going to cost a lot less. Short-term scarcity eats up a lot of billions right now, but you'd be mistaken to confuse that with long-term structural cost. Cost is going to come down, and that will drive adoption. And that's before you consider edge compute on commodity phones and laptops. There will be billions of devices running small AI agents. Add robotics to the mix and it's a whole new world.

In short, companies like OpenAI and Anthropic are valued so high because all of that is happening right now. Yes, it's a bit of a bubble. But stuff will definitely happen.


On the other hand, the productivity gains from AI automation are so large that you are forced to use it to compete in the workplace. Even if you strongly dislike the terminal, you will dislike homelessness more.


Nice observation about AI-generated content:

> I’ve had the idea that from a social perspective it’d be regarded like plastic surgery, in that it only looks weird when its over-done, or done badly.


An important aspect of comparison is that nobody is going to tell you that your surgery is noticeable or looks bad.

Your friends, family, partners, coworkers, aren't going to say anything, neither are people you meet casually, certainly not service workers, strangers aren't going to pull you aside to tell you the truth about your nose job, etc.

I hope the same social taboo doesn't transfer over to AI content. We should honestly critique AI generated content, used either in-whole or in-part with human creations. If the inclusion of AI content botched your article, saying so should be socially acceptable.

We saw some of this here on HN. It used to be that when AI content was submitted here, it was a social faux pas to even mention it was LLM-generated; the same went for LLM-generated comments, no matter how obvious it was. Mentioning that a comment was AI was socially verboten, and you'd be finger-wagged at.

Eventually, AI fatigue caused the community to discount Show HN entries, submissions and comments, and the signal to noise ratio could no longer be ignored.

Now, turn on showdead. Those same comments, that users were expected to interact with as if they were made in good faith by real people, litter every submission's comment section. These comments objectively hurt discussion and it's a good thing they're shadowbanned.

Culturally, I hope we can reach a point where critique of AI content, including code, doesn't brand critics as haters, Luddites, or worse, and stifle conversation about what our communities really value and want.


I like the idea of promoting honest feedback on AI-generated content socially. My experience, especially on LinkedIn, is not only that doing so seems to be a social taboo, but also that the algorithm hinders it: if you post something and get comments from people obviously using AI bots to comment on other posts, you can either tell the person, or just accept that the comment is probably AI-written and still write an answer as if it were not. The issue with the algorithm is that it rewards the latter.


We definitely must come from different cultures because I raised my eyebrow quite high reading your comment.

My family, close friends, and my partner will definitely tell me when I neglect or abuse my mental and physical health. This includes bad decisions about the way I look.

And every time they do this I am thankful to them, because they usually notice these things way sooner than I would have.


> An important aspect of comparison is that nobody is going to tell you that your surgery is noticeable or looks bad.

Just post a picture on the internet and let strangers comment. You will absolutely get honest feedback, but you probably don’t really want that. TBH same with code and ideas, given the reception my articles have had over the years on HN and Reddit. Can be brutal.


Your friends, family, partners, coworkers, aren't going to say anything about your natural appearance either. Unless they're super rude.


People will tell you if you're good looking


> Now, turn on showdead. Those same comments, that users were expected to interact with as if they were made in good faith by real people, litter every submission's comment section.

One big issue I've found is that HN seems to automatically kill comments from all new users, no matter the content. I used to try to change handles every so often because HN doesn't allow people to delete their comments after the first hour, which becomes a bigger and bigger privacy issue over time (and, frankly, is extremely hostile to users). Especially for those of us who don't use AI, our individual writing styles are likely identifiable over a long enough period of time.

But the last few times I tried it, all of my comments were immediately shadowbanned. No notification or any indication on the new account, but if I checked with an older account, the comments were all "dead." I try to put effort into my comments, reading through the entirety of the comment I'm replying to (often multiple times), proofreading them myself (I never use AI), and linking to any claims I'm making. All of this takes considerable time. It's extremely frustrating to put that kind of effort into a comment and have it autobanned. It's even more frustrating when the system deceives you and makes you believe it's been posted, and you have to check with another account to learn that it was actually set to dead.

Supposedly there's a desire for comments that people put effort into and aren't written by AI. But why would new users bother putting in that work when their comments get automatically and secretly killed, without them having any way of knowing?

I'm starting to think that the best solution is to move away from these types of online communities in general.


It's the same way with writing as with video. There are some videos now where it's actually hard to tell. You can only tell it's AI when it's bad. When it's done well, you don't even know it's AI.

So it creates this selection effect where people only associate AI with fake and bad. The good stuff, they don't associate with AI at all.


But there is also the case where you see polished apps that are AI-generated. It's like those AI websites: they look "sleek" but all look the same, versus a crappier site that doesn't look as pretty but feels very human. I don't know quite how to put it.


It's funny you mention that. The only difference is sometimes you need a functionality without doing the plumbing. At the end of the day if you're getting the output you need, the process doesn't matter. It's an interesting analogy but only works if the inspector is another expert dev.


When I have such a moment and take a step back, there’s usually a strong hint that there’s a meta-problem behind those instances. And while you have to choose when to take the time to solve such a problem, it’s usually worth it.


I wish I could always take the time to do things right, but in reality time is an extremely scarce resource. That's when these AI agents help the most.


It looks like my one-star repository [1] came close to making this person's leaderboard for number of commits (currently 5,524 since January, all by Claude Code). I'm not sure what that means, though. Only a small percentage of those commits are code. The vast majority are entries for a Japanese-English dictionary being written by Claude under my supervision. I'm using Github for this personal project because it turned out to be more convenient than doing it on my local computer.

[1] https://github.com/tkgally/je-dict-1


Make your own Github: forgejo.org

One used Lenovo micro PC (size of a book) from eBay will serve you well.
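If you'd rather try Forgejo in a container before committing to hardware, a minimal compose file along the lines of Forgejo's documented Docker setup might look like this (the image tag and port mappings are assumptions; check the current Forgejo installation docs before using):

```yaml
# docker-compose.yml: minimal self-hosted Forgejo sketch (unverified tag/ports)
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pin to whatever the current stable tag is
    restart: always
    volumes:
      - ./forgejo-data:/data                # repos, config, and database live here
    ports:
      - "3000:3000"                         # web UI
      - "2222:22"                           # SSH for git push/pull
```

After `docker compose up -d`, the first-run setup wizard is served on port 3000.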


Thanks for the recommendation. I didn’t know about forgejo.org.

The main convenience of Github for me is the ability to send preprepared prompts to Claude through its web interface or the mobile app and have it write or revise a batch of dictionary entries in the repository. I can then confirm the results on the built website, which is hosted on Github Pages, and request changes or reverts to Claude when necessary. Each prompt takes ten to thirty minutes to carry out and I run a dozen or more a day, and it is very convenient to be able to do that prompting and checking wherever I am.

When I have Claude make changes to the codebase, I find that I need to pay closer attention to the process. I can’t do that while sitting in a restaurant or taking a walk, like I do with the prompting for dictionary-entry writing. The next time I start a mostly (vibe) coding project, I’ll look into Forgejo.


Just don't put it online, because the AI scrapers will find it and crawl it so aggressively that your mini/micro-PC will blow up.


This is awesome. Your repo is now two stars.


Thanks! The dictionary should be more or less finished in a few months. If you or anyone else might find it helpful for studying Japanese, feel free to use it, copy it, and adapt it however you like.

