
Does OpenRouter perform better than LiteLLM on integration, though? I found that using Anthropic's models through a LiteLLM-laundered OpenAI-style API performed noticeably worse than using Anthropic's API directly, so I've scrapped LiteLLM as an option. It's also just a buggy mess, judging from my attempts to use its MCP server: the errors it puts out are meaningless, and the UI behaves oddly even on the happy path (an error message colored green with "Success:" prepended).

But if OpenRouter does better (even though it's the same sort of API layer) maybe it's worth it?
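To make the "same sort of API layer" concrete: any OpenAI-compatible proxy (LiteLLM, OpenRouter) has to translate between the OpenAI chat format and Anthropic's native Messages API, which structures requests differently. A minimal sketch of the two wire formats side by side; the model slugs and system prompt are illustrative, not from this thread:

```python
import json

def openai_style_payload(prompt: str) -> dict:
    # What an OpenAI-compatible proxy accepts: the system prompt
    # travels as a message inside the messages list.
    return {
        "model": "anthropic/claude-sonnet-4",  # illustrative slug
        "messages": [
            {"role": "system", "content": "You are terse."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 256,
    }

def anthropic_native_payload(prompt: str) -> dict:
    # Anthropic's native Messages API instead takes the system prompt
    # as a top-level "system" field, and max_tokens is required.
    return {
        "model": "claude-sonnet-4",  # illustrative slug
        "system": "You are terse.",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

print(json.dumps(openai_style_payload("hi"), indent=2))
print(json.dumps(anthropic_native_payload("hi"), indent=2))
```

The proxy does this remapping (and more, e.g. for tool calls) on every request, which is one plausible place where quality differences between layers creep in.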




OpenRouter performs much, much better than the LiteLLM proxy. In my experience, if OpenRouter offers a model, the API will be supported. They also often have inference providers available that perform much better than the default provider. Just as an example, Z.ai is sitting at around 10 tokens/s for GLM 5.1 while friendly is doing 70 tokens/s for the same model through OpenRouter.
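Picking a faster provider doesn't have to be manual: OpenRouter documents a `provider` routing preference in the request body for pinning providers. A hedged sketch of such a request body; the model and provider slugs are hypothetical placeholders, so check the model's OpenRouter page for the real ones:

```python
import json

# Hypothetical slugs below; OpenRouter's "provider" preferences object
# is documented, but the exact names depend on the model's listing.
request_body = {
    "model": "z-ai/glm-5.1",  # hypothetical slug for the thread's example
    "messages": [{"role": "user", "content": "hello"}],
    "provider": {
        "order": ["some-fast-provider"],  # try this provider first
        "allow_fallbacks": False,         # fail rather than fall back
    },
}

print(json.dumps(request_body, indent=2))
```

This body is POSTed to the usual `/chat/completions` endpoint; with `allow_fallbacks` off, you get the pinned provider's speed or an error, never a silent downgrade.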

The LiteLLM proxy also adds quite a bit of overhead.

I have personally settled on Bifrost as my router, connecting to a mix of OpenRouter and some other providers that I deem more privacy-friendly.




