Hacker News | nayuki's comments

When people see that binary float64 causes 0.1 + 0.2 != 0.3, the immediate instinct is to reach for decimal arithmetic, and then to claim that you must use decimal arithmetic for financial calculations. I would rate these statements as half-true at best. Yes, 0.1 + 0.2 = 0.3 in decimal floating-point or fixed-point arithmetic, and yes, it's bad accounting practice to sum a bunch of items and get a total that differs from the true answer.

But decimal floats fall short in subtle ways. Here is the simplest example: sales tax. In Ontario it's 13%. If you buy two items for $0.98 each, the tax on each is $0.1274. There is no legal, interoperable mechanism to charge the customer a fractional number of cents, so you just can't do that. If you are in charge of producing an invoice, you have to decide where to perform the rounding(s). You can round the tax on each item to $0.13, so the total is ($0.98 + $0.13) × 2 = $2.22. Or you can add up all the pre-tax items ($1.96), calculate the tax ($0.2548), and round that ($0.25), which brings the total to $0.98 × 2 + $0.25 = $2.21, a different amount. Not only do you have to decide where to perform the rounding(s), you also have to keep track of how many extra decimal places you need. Massachusetts's sales tax is 6.25%, so that's two more decimal places. If you have discounts like "25% off", that's yet another phenomenon that can introduce extra decimal places.
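Both rounding policies can be sketched with Python's standard decimal module (ROUND_HALF_UP shown for illustration; the legally required rounding mode may differ by jurisdiction):

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")
price = Decimal("0.98")
rate = Decimal("0.13")  # Ontario HST

# Option A: round the tax on each item, then sum
tax_each = (price * rate).quantize(CENT, rounding=ROUND_HALF_UP)  # 0.1274 -> 0.13
total_a = (price + tax_each) * 2                                  # 2.22

# Option B: sum the pre-tax items, then round the tax once
subtotal = price * 2                                              # 1.96
tax_once = (subtotal * rate).quantize(CENT, rounding=ROUND_HALF_UP)  # 0.2548 -> 0.25
total_b = subtotal + tax_once                                     # 2.21

print(total_a, total_b)  # 2.22 2.21 -- two plausible invoices, one cent apart
```

Decimal arithmetic makes each step exact, but it cannot decide for you which of the two totals is the "correct" invoice.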

If you do any kind of interest calculation, the decimal places will necessarily explode. The simplest example: take $100 at 10% annual interest compounded annually, which gives you $110, $121, $133.1, $146.41, $161.051, $177.1561, etc., and you will need to round eventually. Another example: 10% annual interest, but accrued daily (so 10%/365 per day) and credited to the account at the end of the month. Not only is 10%/365 inexact in decimal arithmetic, but the tiny daily interest calculations also generate many decimal places.

If you do anything that philosophically uses "real numbers", then decimal FP has zero advantages over binary FP. If you use pow(), exp(), cos(), sin(), etc. for engineering calculations, continuous interest, physics modeling, describing objects in a 3D scene, and so on, there will necessarily be all sorts of rational, irrational, and transcendental numbers flying around, and they will have to be approximated one way or another.
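A quick Python illustration: decimal arithmetic is just as helpless with a plain rational like 1/3, and binary float64 has the analogous problem with irrationals.

```python
import math
from decimal import Decimal

# 1/3 has no finite decimal expansion, so it gets rounded to the
# context precision just like 0.1 gets rounded in binary
third = Decimal(1) / Decimal(3)
print(third * 3 == Decimal(1))  # False

# sqrt(2) is irrational, so squaring its float64 approximation
# does not recover exactly 2
print(math.sqrt(2) ** 2 == 2)   # False
```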


When writing financial software, one almost always reaches for a decimal library in that language and ends up using that instead of the language's built-in floats. (Sometimes you can use ints, but you can't once you need to do things like those described above.)

Overall, yes, results need to be rounded, but it's pretty much financial software 101 not to use floats.


The one advantage of decimal floating point is that high schoolers have a better understanding of where decimal rounding happens.

This is legitimately a great explanation.

Reminds me of PG's classic essay, "Taste for Makers" (2002): https://paulgraham.com/taste.html

> AI may be making us think and write more alike

Many technologies have been doing that for centuries. For example: the printing press making books available to the masses, standardized English spelling (it was a mess before!), radio and TV broadcasting speech and thus creating more uniform accents nationwide, the Internet spreading all kinds of information globally and instantly, even memes (literally thinking and writing alike).


Yeah, that doesn't mean it's not a bad thing.

Well, that escalated quickly. From skimming the first few screenfuls of the page, I thought it was going to be about a beginner programmer's first time dissecting the low-level bit structure of floating-point numbers and implementing some of the arithmetic and logic from scratch using integer operations.

And then she writes about logic cells, hardware pipeline architectures, RTL (register-transfer level), ASIC floorplans, and tapeout. Building hardware is hard, as iteration cycles are long and fabricating stuff costs real money. That was quite a journey.

Her "about" page adds more context that helps everything make sense:

> Professionally, I have a Masters in Electrical and Computer Engineering, have previously worked as a CPU designer at ARM and an FPGA engineer at Optiver. -- https://essenceia.github.io/about/


Zscaler was deployed on a Windows 11 machine at a place where a friend worked. When I assessed the software, its behavior was downright evil, like malware. As we all know, it injects its own root certificate into the operating system in order to conduct man-in-the-middle attacks on TLS/HTTPS connections and monitor the user's web activity.

Furthermore, it locks down the web browser's settings so that you cannot use a proxy server to bypass Zscaler's MITM. I saw this behavior in Mozilla Firefox, where the proxy option is set to "No proxy" and all other options are disabled and grayed out; I imagine that it does the same to Google Chrome. If you try to modify the browser's .ini(?) file for proxy settings, Zscaler immediately overwrites it against your will. Zscaler worked very hard to enforce its settings in order to spy on the computer user.

And as you'd expect, if you open the Zscaler GUI in the system tray, you are presented with the option to disable the software if you have the IT password, which of course you don't. Then again, that might be an epsilon better than the Cybereason antivirus software, which has a system tray icon with no exit option, cannot be killed in Task Manager even if you are a local administrator, and imposes a heavy slowdown if you open hundreds of small text files per second.


I have a different take on the same topic: https://www.nayuki.io/page/java-biginteger-was-made-for-rsa-...

My article isn't written as a step-by-step tutorial and doesn't come with example numbers. But mine fills in certain things that xnacly's doesn't cover: random prime generation, efficiently calculating the decryption exponent d from (n, e) using a modular inverse, and using modular exponentiation instead of power-then-modulo.

By the way, for Python, modular exponentiation is pow(x, y, m) (since 3.0), and modular inverse is pow(x, -1, m) (since 3.8, Oct 2019). https://docs.python.org/3/library/functions.html#pow
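For instance, an end-to-end toy check using only those two built-ins (p=61, q=53 is the classic textbook keypair; real RSA uses primes hundreds of digits long):

```python
# Toy RSA keypair with textbook-sized primes (purely illustrative)
p, q = 61, 53
n = p * q                   # modulus: 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent
d = pow(e, -1, phi)         # modular inverse (Python >= 3.8): 2753

msg = 65
ct = pow(msg, e, n)         # modular exponentiation: encrypt -> 2790
pt = pow(ct, d, n)          # decrypt
print(d, ct, pt)            # 2753 2790 65
```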


Could you add a step-by-step tutorial like the OP's here?


> Most consumers are using laptops and laptops are not keeping pace with where the frontier is in a singular compute node. Laptops are increasingly just clients for someone else's compute that you rent, or buy a time slice with your eyeballs, much like smartphones pretty much always have been.

It really feels like we're slowly marching back to the era of mainframe computers and dumb terminals. Maybe the democratization of hardware was a temporary aberration.


True. A most basic CRC implementation is about 7 lines of code (presented in Java to avoid some C/C++ footguns):

    int crc32(byte[] data) {
        int crc = ~0;
        for (byte b : data) {
            crc ^= b & 0xFF;
            for (int i = 0; i < 8; i++)
                crc = (crc >>> 1) ^ ((crc & 1) * 0xEDB88320);
        }
        return ~crc;
    }
Or smooshed down slightly (with caveats):

    int crc32(byte[] data) {
        int crc = ~0;
        for (int i = 0; i < data.length * 8; i++) {
            crc ^= (data[i / 8] >> (i % 8)) & 1;
            crc = (crc >>> 1) ^ ((crc & 1) * 0xEDB88320);
        }
        return ~crc;
    }
But one reason that many CRC implementations are large is that they include a precomputed table of 256 32-bit constants so that one byte can be processed at a time. For example: https://github.com/madler/zlib/blob/7cdaaa09095e9266dee21314...
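The table-driven variant could be sketched like this (in Python for brevity; same reflected polynomial 0xEDB88320 as the Java code above):

```python
# Build the 256-entry table: the CRC contribution of each byte value,
# computed with the same bit-at-a-time loop as the short version
POLY = 0xEDB88320
TABLE = []
for b in range(256):
    c = b
    for _ in range(8):
        c = (c >> 1) ^ POLY if c & 1 else c >> 1
    TABLE.append(c)

def crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        # One table lookup replaces eight shift-and-XOR iterations
        crc = (crc >> 8) ^ TABLE[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF

print(hex(crc32(b"123456789")))  # 0xcbf43926, the standard CRC-32 check value
```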


That's Java code, though... a bit weird, especially i % 8 (which is just i & 7). The compiler should be able to optimize it since i is guaranteed to be non-negative, but it's still awkward.

Java's CRC32 nowadays uses hardware intrinsics (SIMD / carry-less-multiply instructions) for crc32.


With C++20 you can use consteval to compute the table(s) at compile time from template parameters.


Just like that author, many years ago, I went through the process of understanding the DEFLATE compression standard and producing a short and concise decompressor for gzip+DEFLATE. Here are the resources I published as a result of that exploration:

* https://www.nayuki.io/page/deflate-specification-v1-3-html

* https://www.nayuki.io/page/simple-deflate-decompressor

* https://github.com/nayuki/Simple-DEFLATE-decompressor


Those resources were a huge help when I was digging into the DEFLATE algorithm, thank you!


> Microsoft PowerToys feels like something that shouldn’t exist in Windows today. What started in 2019 as a couple of utilities for things like window and shortcut management has gradually expanded to nearly 30 useful tools

Year 2019? Oh no, not by a long shot. I've been using Microsoft-branded PowerToys since Windows XP, around 2001. Wikipedia says:

> PowerToys are available for Windows 95, Windows XP, Windows 10, and Windows 11 (and explicitly not compatible with Windows Vista, 7, 8, or 8.1).

https://en.wikipedia.org/wiki/Microsoft_PowerToys


They've been around since the 1980s, only they were called TSRs then and were mostly by third parties rather than Microsoft.

Not sure why this is news; there are a bazillion little Windows utilities out there not made by Microsoft. They just aren't called PowerToys or carrying Microsoft branding.


A journalist writing about something technology-related and getting their facts wrong? The devil, you say?

