
I let the AI first generate an outline of how it would do it, as markdown. I adapt this and then have it add details in additional markdown files about technical topics, e.g. how to use a certain SDK and so on. I correct all of these, and then I let the AI generate the classes from the outline one by one.
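A trimmed-down, purely hypothetical example of what such an outline file could look like (all names invented):

```markdown
# Plan: CSV import feature

## Classes (generated one by one later)
- CsvReader — parses rows, validates the header line
- ImportService — maps rows to domain objects, rejects duplicates

## Detail files (corrected by hand before code generation)
- docs/storage-sdk.md — notes on using the storage SDK's batch upload
```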


First of all, they are not violating any license or terms in any form. They add value and enable thousands of people to use local LLMs who would not be able to do so as easily otherwise. Maybe llama.cpp should mention that Ollama takes care of easy, workable access to their functionality…


> First of all, they are not violating any license or terms in any form.

IANAL, but from what I understand it's likely debatable at the very least. You'll notice I said "sleazy" and didn't touch on the license, potential legal issues, etc.

I'm pointing out that other projects that are substantially based on or dependent on other pieces of software to do the "heavy lifting" nearly always acknowledge it. An example is faster-whisper, which is a good parallel and actually has "with CTranslate2" right in the heading[0], with direct links to whisper.cpp and CTranslate2 immediately following.

Ollama is the diametric opposite of this: unless you go spelunking through commits, etc., you'd have no idea that Ollama doesn't do much of the underlying LLM work. Take a look at llama.cpp to see just how much of "Ollama's functionality" it actually provides.

Then look at /r/LocalLLaMA, HN, etc to see just how many Ollama users (most) have no idea that llama.cpp even exists.

I don't know how this could be anything other than an attempt to mislead people into thinking Ollama is uniquely and directly implementing all of the magic. It's pretty glaring and has been pointed out repeatedly. It's not some casual oversight.

> They add value and enable thousands of people to use local LLMs, that would not be able to do that so easy otherwise.

The very first thing I said went so far as to mention commits, the model zoo, etc., while specifically acknowledging the level of effort and added value.

> Maybe llama.cpp should mention that Ollama takes care of easy workable access to their functionality…

Are you actually suggesting that enabling software should mention, track, or even be aware of the likely countless projects that are built on it?

[0] - https://github.com/SYSTRAN/faster-whisper


The llama.cpp license does actually require attribution, and I'm not sure exactly how Ollama is complying with that.


No. It's weird that your comment is so high up, but it shows how little crypto knowledge there is on HN. A Bitcoin is a Bitcoin, but it consists of smaller units called satoshis, like dollars and cents, and has since the beginning of Bitcoin. A fork is a copy of the underlying Bitcoin source code, which in itself has no value at all. You would also need to find people to run nodes for your fork and start convincing exchanges to support it too… so no, this is not splitting Bitcoin.


I thought it was the reverse. Satoshi is the unit at the code level and Bitcoin is just a representation for UX reasons.


Yes, this is accurate: the satoshi is the unit, and "Bitcoin" is more of a UX thing.
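A minimal sketch of that unit relationship: the ledger stores integer satoshis, and "1 BTC" is just a display convention for 100,000,000 of them (the helper functions here are illustrative, not real Bitcoin code):

```python
SATS_PER_BTC = 100_000_000  # 1 BTC displays 10^8 protocol-level satoshis

def btc_to_sats(btc: str) -> int:
    """Parse a decimal BTC string (up to 8 fractional digits)
    into the integer satoshi amount the ledger actually stores."""
    whole, _, frac = btc.partition(".")
    return int(whole) * SATS_PER_BTC + int(frac.ljust(8, "0"))

def sats_to_btc(sats: int) -> str:
    """Format an integer satoshi amount the way UIs display BTC."""
    return f"{sats // SATS_PER_BTC}.{sats % SATS_PER_BTC:08d}"

assert btc_to_sats("1") == 100_000_000
assert btc_to_sats("0.00000001") == 1           # one satoshi
assert sats_to_btc(150_000_000) == "1.50000000"
```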


If you fork the bitcoin blockchain you have now two ledgers, each one telling you that a certain number of bitcoins really exist. Who is to say that one is right and the other is wrong? In this scenario, it's debatable how many bitcoins are in existence.


You might want to read the Bitcoin whitepaper. There is zero ambiguity in the situation you describe. That is the essence of the proof-of-work (or proof-of-stake, take your pick) solution to the Byzantine generals problem.
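A toy sketch of that resolution rule (not real Bitcoin code): after a fork, every node applies the same deterministic rule — follow the chain with the most accumulated proof-of-work — so there is no ambiguity about which ledger counts:

```python
# Toy model: a chain is a list of blocks, each carrying a difficulty
# value that stands in for the work its proof-of-work represents.

def chain_work(chain):
    """Total accumulated work: sum of per-block difficulty."""
    return sum(block["difficulty"] for block in chain)

def best_chain(chains):
    """Every node picks the chain with the most cumulative work."""
    return max(chains, key=chain_work)

# Two competing forks after a split:
fork_a = [{"difficulty": 10}, {"difficulty": 10}, {"difficulty": 12}]
fork_b = [{"difficulty": 10}, {"difficulty": 10}]

assert best_chain([fork_a, fork_b]) is fork_a  # more work wins
```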


The ambiguity has to do with the fact that we don't know for sure which blockchain will prevail, because the protocol is unenforceable.


Microsoft and many others.


The core use case might be to have Anchor connected to a cert-manager instance in k8s and then be able to generate valid certificates for non-public services. These would also use solely private DNS.


You can already create a self-signed CA directly in cert-manager. That has several advantages: the private key never leaves your infrastructure, you don't need to create a login account on some external service, it works fine behind an air gap, and you can use your existing DNS domain instead of Anchor's "lcl.host", which seemingly requires that queries for your "private" URLs now go to public DNS servers.
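For reference, the usual cert-manager bootstrap pattern looks roughly like this (names are placeholders): a self-signed issuer mints a root CA certificate, which then backs a cluster-wide CA issuer:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-bootstrap
spec:
  selfSigned: {}
---
# Root CA cert, signed by the bootstrap issuer above.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: internal-root-ca
  secretName: internal-root-ca
  issuerRef:
    name: selfsigned-bootstrap
    kind: ClusterIssuer
---
# Issuer your workloads actually request certs from.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: internal-ca
spec:
  ca:
    secretName: internal-root-ca
```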


Can you elaborate on this? We have some 300 internal APIs on a valid domain. We used to use let’s encrypt, but got rate limited for obvious and fair reasons when we were migrating between clusters. It’s a bit better with zerossl, but we still get 429s when cert-manager is issuing a ton of certs at the same time.


Just wanted to clarify that `lcl.host` is a service that only helps with local development, it's not useful (and shouldn't be used) in staging & production environments. For staging & production, we let customers use a public domain they own, or a special use domain (`.local`, `.test`, `.lan` etc).

Here's how the architecture you described works with Anchor: assuming your domain is `mycorp.it`, you can add it to your organization. Then create staging & production environments. This provisions a stand-alone CA per environment, and the CA is name-constrained for the environment (e.g. only `*.stg.mycorp.it` in staging).

Each of the 300 APIs can be registered as a service: this provisions an intermediate CA per environment that is further name-constrained (e.g. `foo-api.stg.mycorp.it` in staging). For each service in each environment you generate a set of API tokens (EAB tokens in ACME parlance) that allows your automation to provision server certs with the ACME client of your choice. Edit: in your case, cert-manager would be the ACME client delegating to Anchor.
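Wiring that up in cert-manager would use its standard ACME issuer with external account binding; a hedged sketch (the server URL and secret names below are placeholders, not Anchor's real endpoints):

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: anchor-staging
  namespace: foo-api
spec:
  acme:
    # Placeholder directory URL for the private ACME CA.
    server: https://acme.example.com/directory
    privateKeySecretRef:
      name: anchor-staging-account-key
    externalAccountBinding:
      keyID: "<EAB key ID issued per service/environment>"
      keySecretRef:
        name: anchor-staging-eab   # Secret holding the EAB HMAC key
        key: secret
```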


Yes, you can certainly delegate cert-manager to a CA in Anchor, which gives you a nice view into the cert material in use in your environment. And the client package support automates the toil of updating the trusted root CA certs in all your apps or images.


I think the idea of try/catch is to let the error bubble up to the place where it can be handled. It usually results in having error handling in a few central places. Your example on GitHub is IMO not how to make the best use of try/catch.

Here are lots of typical exception-handling patterns: http://wiki.c2.com/?ExceptionPatterns

Personally I prefer exceptions over boilerplate "ifs", but good to know that there's a wrapper for the people who don't.
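A minimal sketch of the "let it bubble" pattern (names invented for illustration): the low-level code raises, intermediate layers stay clean, and one central place handles failures instead of a check after every call:

```python
class ConfigError(Exception):
    pass

def load_port(settings):
    # Raises instead of returning an error code the caller must check.
    if "port" not in settings:
        raise ConfigError("missing 'port'")
    return int(settings["port"])

def start_app(settings):
    # No try/except here: errors bubble up through this layer untouched.
    return f"listening on {load_port(settings)}"

def main(settings):
    # The one central place that handles whatever bubbled up.
    try:
        return start_app(settings)
    except ConfigError as exc:
        return f"startup failed: {exc}"

print(main({"port": "8080"}))  # listening on 8080
print(main({}))                # startup failed: missing 'port'
```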


The reason there's a ton of attention on nuclear disasters is the damage potential.


You don't need to split an application into containers; you can have your full app in a container, no problem. You can have all your jars in the container. Using containers doesn't mean you now have to split your apps in different ways; it means you ship the runtime, compiled code, whatever libraries are required, and all that in a bundle.
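As a sketch (file names and base image are hypothetical, and this assumes `app.jar` is an executable jar), a whole Java app can ship as one container like this:

```dockerfile
# Ship runtime + compiled code + all required jars as one bundle.
FROM eclipse-temurin:21-jre

WORKDIR /app
# The full app: main jar plus its dependency jars, unsplit.
COPY target/app.jar ./app.jar
COPY target/lib/ ./lib/

ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```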


TIL some people believe desktop computing is more natural than spatial computing


Believing (and preaching) is easy. Some tangible evidence is harder.


I fully agree with that experience. The next thing I tried was to have three screens with different DPIs. It's possible (that's a lie, since you can't adjust DPI and scaling properly in Linux at all; you'll end up with a half-working, ugly system and irrational behavior when you move windows around), but sadly my notebook needs to work in more than one place…

I invited a long-time Linux user, who claimed to never have had any problems whatsoever, to have a look. His verdict was that I should have only one external screen and use the same screen in the different places.

This stuff works with Windows and Mac out of the box.

