zeusk's comments | Hacker News

> copper can barely support 10G and is terribly power hungry when it does that.

AFAIK, Thunderbolt cables are also copper - so what trickery do they use to support USB4-80? I believe both connectors use differential-pair wires for signalling.


It's simply length. Ethernet is expected to work over 50-100 m runs, while USB4 specifies a maximum cable length of 2 m even for just 5 Gbps (at least for passive cables). 80 Gbps is 0.8 m.

The longer Thunderbolt (which is actually just USB4) cables internally use fiber optics for data transmission, with converters to copper in each connector. Even the medium-distance (3 meter) ones have signal quality boosters in each connector matched to the kind of signal degradation that kind of cable will experience.

Completely passive TB4/TB5 cables max out at about 80 centimeters.


Besides what retatop and crote said - USB4 uses either binary or PAM-3, while 10GBASE-T uses PAM-16. Higher-order modulation means a lower frequency bandwidth for the same bit rate (so a longer and/or crappier cable), but also a more current-hungry line driver and more current required to keep noise manageable.
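Rough, idealized numbers for the trade-off above (a sketch only - real links add FEC and coding overhead, so effective bits per symbol are lower than these ideal figures):

```python
import math

def bits_per_symbol(levels: int) -> float:
    """Ideal information content of one PAM-N symbol, ignoring coding overhead."""
    return math.log2(levels)

# More levels per symbol -> fewer symbols/second needed for the same bit rate,
# i.e. less analog bandwidth, but tighter noise margins between levels.
for name, levels in [("NRZ/PAM-2", 2), ("PAM-3", 3), ("PAM-16", 16)]:
    print(f"{name}: {bits_per_symbol(levels):.2f} bits/symbol")
```

So PAM-16 packs 4 ideal bits into each symbol versus about 1.58 for PAM-3, which is why 10GBASE-T can live with much less cable bandwidth but needs a much cleaner (and hungrier) analog front end.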

Half-Life: Alyx and their push for OpenVR have made a big impact in that part of the gaming world.

But yeah, their games are just as filled with lootboxes, crates, and skin garbage as other low-effort money grabs; the saving grace being it's all cosmetics only (and they're private about their financials).


Ordering lock acquisition is a tested strategy to avoid deadlocks; so locking the cache lines sorted by PA (physical address) would cover that?
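A minimal sketch of the ordered-acquisition idea, using `id()` as a stand-in for the physical address (toy code, not how hardware does it):

```python
import threading

def lock_in_order(locks):
    """Acquire locks sorted by a stable key so every thread takes them in the
    same global order; a cyclic wait (deadlock) then becomes impossible."""
    ordered = sorted(locks, key=id)  # stand-in for sorting cache lines by PA
    for lk in ordered:
        lk.acquire()
    return ordered

def unlock(ordered):
    for lk in reversed(ordered):
        lk.release()

a, b = threading.Lock(), threading.Lock()

def worker(first, second):
    held = lock_in_order([first, second])
    try:
        pass  # critical section touching both "cache lines"
    finally:
        unlock(held)

t1 = threading.Thread(target=worker, args=(a, b))
t2 = threading.Thread(target=worker, args=(b, a))  # opposite order, same global order
t1.start(); t2.start(); t1.join(); t2.join()
print("no deadlock")
```

Without the sort, the `(a, b)` / `(b, a)` pair is exactly the classic deadlock shape; with it, both threads agree on one total order.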


This is very spot on in my experience.


> My personal dream is that vms would support pci pass through and so you can just spin up a Linux vm and let it drive the gpus.

SR-IOV is just that? And it is well supported by both Windows and Linux.


Yes - that's what I was referring to. Basically, the virtualization framework supports handing a specific PCIe device off to a VM. Link management is still handled by macOS, but the actual PCIe packets are handled by the VM (which could be Windows or Linux, which would have a GPU driver).


They’re used by the internal register renamer/allocator, so if it sees you’re storing the result to memory and then reusing the named register for a new result, it will allocate a new physical register so your instruction doesn’t stall waiting for the previous write to go through.


I do not understand what you want to say.

The register renamer allocates a new physical register when you attempt to write the same register as a previous instruction, as otherwise you would have to wait for that instruction to complete, and you would also have to wait for any instructions that would want to read the value from that register.
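A toy Python model of that rename table (illustrative only, not any real microarchitecture - the names `Renamer`, `write`, and `read` are made up for the sketch):

```python
class Renamer:
    """Toy rename table: every architectural write gets a fresh physical
    register, so a later write to the same name never has to wait for an
    earlier instruction (or its readers) to finish with the old value."""
    def __init__(self):
        self.table = {}      # architectural name -> current physical register
        self.next_phys = 0

    def write(self, arch):
        """Rename the destination of an instruction writing `arch`."""
        self.table[arch] = self.next_phys
        self.next_phys += 1
        return self.table[arch]

    def read(self, arch):
        """A reader sees whichever physical register held `arch` at rename time."""
        return self.table[arch]

r = Renamer()
p0 = r.write("rax")   # first producer of rax
p1 = r.write("rax")   # name reused -> brand-new physical register, no WAW/WAR stall
assert p0 != p1       # readers of the old value still have p0; new value lives in p1
```

The key point the comment makes is visible here: both values coexist in separate physical registers, so neither write blocks the other.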

When you store a value into memory, the register renamer does nothing, because you do not attempt to modify any register.

The only optimization is that if a following instruction attempts to read the value stored in memory, that instruction does not wait for the previous store to complete in order to load the stored value from memory; instead it gets the value directly from the store queue. But this has nothing to do with register renaming.
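A minimal sketch of that store-to-load forwarding (a toy model with invented names, not real hardware):

```python
class StoreQueue:
    """Toy store buffer: a load first checks pending (not yet retired) stores,
    newest first, and forwards the value instead of waiting for the store to
    drain to memory."""
    def __init__(self, memory):
        self.memory = memory
        self.pending = []            # (address, value) in program order

    def store(self, addr, value):
        self.pending.append((addr, value))   # buffered, memory not yet updated

    def load(self, addr):
        for a, v in reversed(self.pending):  # youngest matching store wins
            if a == addr:
                return v                     # store-to-load forwarding
        return self.memory[addr]             # miss: go to memory as usual

mem = {0x100: 1}
sq = StoreQueue(mem)
sq.store(0x100, 42)
print(sq.load(0x100))  # prints 42: forwarded from the queue, mem still holds 1
```

As the comment says, this is a bypass on the memory path; no register name is involved anywhere.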

Thus if your algorithm has already used all the visible register numbers, and you will still need in the future all the values that occupy the registers, then you have to store one register into memory, typically on the stack, and the register renamer cannot do anything to prevent this.

This is why Intel will increase the number of architectural general-purpose registers of x86-64 from 16 to 32, matching Arm AArch64 and IBM POWER, with the APX ISA extension, which will be available in the Nova Lake desktop/laptop CPUs and in the Diamond Rapids server CPUs, which are expected by the end of this year.

Register renaming is a typical example of the general strategy that is used when shared resources prevent concurrency: the shared resources must be multiplied, so that each concurrent task uses its private resource.


> When you store a value into memory, the register renamer does nothing, because you do not attempt to modify any register.

You are of course correct about everything. But the extreme pedant in me can't help pointing out that there are in fact a few mainstream CPUs[1] that can rename memory to physical registers, at least in some cases. This is done explicitly to mitigate the cost of spilling. edit: this is different from the store-forwarding optimization you mentioned.

[1] Ryzen for example: https://www.agner.org/forum/viewtopic.php?t=41


That feature does not exist in every AMD Zen, but only in certain Zen generations, and somewhat unpredictably, i.e. not in successive generations. This optimization has been introduced and then removed a couple of times. Therefore it is not an optimization on whose presence you can count in a processor.

I believe that it is not useful to group such an optimization with register renaming. The effect of register renaming is to replace a single register shared by multiple instructions with multiple registers, so that each instruction may use its own private register, without interfering with the other instructions.

On the other hand, the optimization you mention is better viewed as an enhancement of the optimization I mentioned, which is implemented in all modern CPUs, i.e. that after a store instruction the stored value persists for some time in the store queue and subsequent instructions can access it there instead of going to memory.

With this additional optimization, the stored values that are needed by subsequent instructions are retained in some temporary registers even after the store queue is drained to the memory as long as they are still needed.

Unlike with register renaming, here the purpose is not to multiply the memory locations that store a value so that they can be accessed independently. Here the purpose is to cache the value close to the execution units, to be available quickly, instead of taking it from the far away memory.

As mentioned at your link, the most frequent case where this optimization is effective is when arguments are pushed onto the stack before invoking a function and then the invoked function loads the arguments into registers. On the CPUs where this optimization is implemented, the passing of arguments to the function bypasses the stack, becoming much faster.

However this calling convention is important mainly for legacy 32-bit applications, because the 64-bit programs pass most arguments inside registers, so they do not need this optimization. Therefore this optimization is more important for Windows, where it is more frequent to use ancient 32-bit executables, which have not been recompiled to 64-bit.


Yes, it is not in all Zen CPUs.

I don't think it makes sense to distinguish it from renaming. It is effectively aliasing a memory location (or better, an offset off the stack pointer) with a physical register, treating named stack offsets as additional architectural registers. AFAIK this is done at the renaming stage.


The named stack offsets are treated as additional hidden registers, not as additional architectural registers.

You do not access them using architectural register numbers, as you would do with the renamed physical registers, but you access them with an indexed memory addressing mode.

The aliasing between a stack location and a hidden register is of the same nature as the aliasing between a stack location at its true address in main memory and the location in the L1 cache where stack locations are normally cached in any other modern CPU.

This optimization present in some Zen CPUs just caches some locations from the stack even closer to the execution units of the CPU core than the L1 cache used for the same purpose in other CPUs, allowing those stack locations to be accessed as fast as the registers.


The stack offset (or in general the memory location's address[1]) has a name (its unique address), exactly like an architectural register, so how can it be a hidden register?

In any case, as far as I know the feature is known as Memory Renaming, and it was discussed in academia decades before it showed up in actual consumer CPUs. It uses the renaming hardware and behaves more like renaming (zero-latency movs resolved at rename time, in the front end) than an actual cache (which involves an AGU and a load unit and is resolved in the execution stages, in the OoO backend).

[1] more precisely, the feature seems to use address expressions to name the stack slots, instead of actual addresses, although it can handle offset changes after push/pop/call/ret, probably thanks to the Stack Engine that canonicalizes the offsets at the decode stage.
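A toy illustration of that idea - a stack-slot reload resolved at the rename stage instead of through the load units - under the simplifying assumption (mine, for the sketch) that slots are named by already-canonicalized sp-offset strings:

```python
class MemRenamer:
    """Toy memory renaming: canonicalized stack slots ("sp+8", ...) are tracked
    at the rename stage like extra registers, so a spill/reload pair can be
    resolved in the front end without ever dispatching a real load."""
    def __init__(self):
        self.slot_to_phys = {}   # "sp+offset" -> physical register holding the value

    def store_to_stack(self, slot, phys_reg):
        # The spill still happens architecturally, but the renamer remembers
        # which physical register holds the stored value.
        self.slot_to_phys[slot] = phys_reg

    def load_from_stack(self, slot):
        # Hit: resolved at rename time, zero latency, no AGU/load unit.
        # Miss (returns None here): fall back to an ordinary load.
        return self.slot_to_phys.get(slot)

m = MemRenamer()
m.store_to_stack("sp+8", 7)            # spill: value lives in physical register 7
assert m.load_from_stack("sp+8") == 7  # reload eliminated in the front end
```

This is only a model of the mechanism being argued about; real implementations must also invalidate the mapping on aliasing stores and mispredictions.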


This has already been tried :)

IIRC, in 2016 a quad-core Intel CPU ran the original Crysis at ~15 fps.


Get the DGX Spark computers? They’re exactly what you’re trying to build.


They’re very slow.


They're okay, generally, but slow for the price. You're more paying for the ConnectX-7 networking than inference performance.


Yeah, I wouldn’t complain if one dropped in my lap, but they’re not at the top of my list for inference hardware.

Although... Is it possible to pair a fast GPU with one? Right now my inference setup for large MoE LLMs has shared experts in system memory, with KV cache and dense parts on a GPU, and a Spark would do a better job of handling the experts than my PC, if only it could talk to a fast GPU.

[edit] Oof, I forgot these have only 128GB of RAM. I take it all back, I still don’t find them compelling.


The TB5 link (RDMA) is much slower than direct access to system memory.


Nvidia has been investing in confidential computing for inference workloads in the cloud - that covers physical ownership/attacks in their threat model.

https://www.nvidia.com/en-us/data-center/solutions/confident...

https://developer.nvidia.com/blog/protecting-sensitive-data-...


It's likely I'm mistaken about the details here, but I _think_ tee.fail bypassed this technology, and the AT article covers exactly that.

