Hacker News
vlovich123 on Dec 17, 2024 | on: New LLM optimization technique slashes memory cost...
Modern LLMs are still quite inefficient in their representation of information. Compression-wise, we're at something like the DEFLATE era and have yet to invent zstd, the point where only marginal incremental gains remain; right now there's a lot of waste to prune away.
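The compression analogy can be made concrete with a small, purely illustrative sketch: compress the same redundant data with stdlib codecs from different generations and compare sizes. zstd itself is not in the Python standard library, so lzma stands in here as the "later-generation" codec; the sample data is made up for the example.

```python
import zlib
import lzma

# Highly redundant sample data, standing in for the "waste" a better
# representation could prune away.
data = b"the quick brown fox jumps over the lazy dog. " * 200

# DEFLATE (via zlib) at a fast level and at its best level.
deflate_fast = len(zlib.compress(data, level=1))
deflate_best = len(zlib.compress(data, level=9))

# A later-generation codec (lzma as a stand-in for zstd, which is not
# in the stdlib) squeezing the same redundancy.
lzma_best = len(lzma.compress(data, preset=9))

print(len(data), deflate_fast, deflate_best, lzma_best)
```

All three outputs land far below the original 9000 bytes on data this repetitive; the point of the analogy is that LLM representations today still have that kind of obvious slack left in them.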