I probably used the wrong word. I meant more about managing volumes properly so we don't have data loss: backups, replication, etc. I assume going managed is easier if you can pay for it (e.g. RDS).
If you're in this comment section, consider play-testing your website. Find someone who has never used it and watch them explore it for the first time, while they think out loud, without giving them any help. My personal website had links to GitHub, LinkedIn, etc. on the home page, and the first thing my brother-in-law did was leave the site, without ever looking at any of my posts, which were indexed on another page.
This example might be obvious to you, but I guarantee there's something you can learn through play-testing.
I think this comment section would be a good place to ask for help with a related problem.
I want to design a "smooth" closed path that fits in a square, as long as I can make it. It needs to be smooth in such a way that a constant velocity can be maintained subject to limits on acceleration and jerk. The point is to develop a test for maximum flow rate on a filament 3D printer without the motion system ever slowing the tool-head down. (In reality, the standard smoothed "E" shape is good enough. This is more of an exercise.)
I know roads are designed to limit acceleration and jerk, and someone knowledgeable about road design would know about finding curves with constraints on acceleration and jerk.
Do you know any free tools or resources on theory that I could use?
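One way to start experimenting without special tools: at constant speed v along a planar path, the acceleration magnitude is v²·κ and the jerk magnitude is v³·√(κ′² + κ⁴), where κ is curvature and κ′ its derivative with respect to arc length. So bounding acceleration and jerk reduces to bounding κ and κ′ (this is exactly why road designers use clothoids, whose curvature varies linearly with arc length). Below is a sketch, not a real planner: it samples a smooth "squircle" inscribed in a square and numerically estimates both maxima. The curve choice, speed, and sample count are arbitrary example values I made up.

```python
import math

def squircle(t):
    """Smooth quartic 'squircle' |x|^4 + |y|^4 = 1 in polar form.
    Fits inside the square [-1, 1] x [-1, 1] and is C-infinity,
    unlike straight segments joined by circular arcs (whose
    curvature jumps imply infinite jerk at constant speed)."""
    f = math.cos(t) ** 4 + math.sin(t) ** 4
    r = f ** -0.25
    return r * math.cos(t), r * math.sin(t)

def curvature_profile(n=4000):
    """Curvature and parametric speed at n samples, via central differences."""
    dt = 2 * math.pi / n
    pts = [squircle(i * dt) for i in range(n)]
    kappas, speeds = [], []
    for i in range(n):
        xm, ym = pts[i - 1]
        x0, y0 = pts[i]
        xp, yp = pts[(i + 1) % n]
        dx, dy = (xp - xm) / (2 * dt), (yp - ym) / (2 * dt)
        ddx = (xp - 2 * x0 + xm) / dt ** 2
        ddy = (yp - 2 * y0 + ym) / dt ** 2
        sp = math.hypot(dx, dy)
        kappas.append((dx * ddy - dy * ddx) / sp ** 3)
        speeds.append(sp)
    return kappas, speeds, dt

def max_accel_and_jerk(v):
    """Worst-case |a| = v^2 * kappa and |j| = v^3 * sqrt(kappa'^2 + kappa^4)
    over the whole closed path, at constant tool-head speed v."""
    kappas, speeds, dt = curvature_profile()
    n = len(kappas)
    a_max = max(v * v * abs(k) for k in kappas)
    j_max = 0.0
    for i in range(n):
        # d(kappa)/ds = (d(kappa)/dt) / (ds/dt)
        dk_ds = (kappas[(i + 1) % n] - kappas[i - 1]) / (2 * dt) / speeds[i]
        j_max = max(j_max, v ** 3 * math.sqrt(dk_ds ** 2 + kappas[i] ** 4))
    return a_max, j_max
```

Given your printer's acceleration and jerk limits, you could then solve for the largest v the path admits, or scale the curve up until the limits are met.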
How much information is there in knowing the length of someone's password?
If we know the password's length, it saves us from guessing any shorter passwords. For example, for a numeric password, knowing the length is 4 saves us from having to guess [blank], 0-9, 00-99 and 000-999. This lowers the number of possibilities from 11111 to 10000, so the password retains 90% of its original strength. A [0-9a-zA-Z] password retains about 98% of its original strength.
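The arithmetic is easy to check. This sketch counts candidates of every length up to the known length versus candidates of exactly that length; it assumes (as the example above implicitly does) that the attacker already knew the length was at most 4. Alphabet sizes 10 and 62 match the two examples.

```python
# Fraction of the candidate space that survives once the attacker
# learns the password's exact length.

def possibilities_up_to(alphabet_size, max_len):
    """All strings of length 0..max_len over the alphabet
    (length 0 being the blank password)."""
    return sum(alphabet_size ** k for k in range(max_len + 1))

def retained_fraction(alphabet_size, known_len):
    """Candidates of exactly known_len, as a fraction of all
    candidates of length <= known_len."""
    exact = alphabet_size ** known_len
    return exact / possibilities_up_to(alphabet_size, known_len)

numeric = retained_fraction(10, 4)   # digits: 10000 / 11111, about 0.90
alnum = retained_fraction(62, 4)     # [0-9a-zA-Z]: about 0.98
```

Measured in guesses rather than bits, knowing the length costs surprisingly little, because the longest length dominates the sum.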
For any given alphabet A, and for any positive integer n, the set of strings of length n over A is a finite set, with (number of characters in A)^n elements.
The set of all strings, of any length over A, is an infinite set, because it is the union of all sets of strings of length n for each positive integer n.
So if you don't know the length of the password, there are infinite possibilities. If you do know the length of the password, there are only finite possibilities.
Which would in turn imply that there is an infinite amount of information in knowing the length of a password: the complement of the set of n-length strings over A within the set of all strings over A contains infinitely many elements, all of which you can safely exclude once you know the password is in the finite set of n-length strings over A.
Only if the password is infinitely long. Which it isn't. The only way knowing the length shaves off a significant amount of time during bruteforcing is if the password is already so short that the time saved isn't relevant in the first place.
Absolute nonsense. Apart from the fact that password length is necessarily finite due to memory and time constraints, passwords aren't stored as clear text. You will get hash collisions, because the number of unique hashes is very much finite.
Your argument therefore doesn't apply in this context.
IMO, the holy grail of 3d dithering is yet to be achieved. runevision's method does not handle surfaces viewed at sharp angles very well. I've thought a lot about a method with fractal adaptive blue noise and analytic anisotropic filtering but I don't yet have the base knowledge to implement it.
My take on it is to use some arbitrary dithering algorithm (e.g. floyd-steinberg, blue noise thresholding, doesn't really matter) for the first frame, and for subsequent frames:
1. Turn the previous dithered framebuffer into a texture
2. Set the UV coordinates for each vertex to their screenspace coordinates from the previous frame
3. Render the new frame using the previous-framebuffer texture and aforementioned UV coords, with nearest-neighbor sampling and no lighting etc. (this alone should produce an effect reminiscent of MPEG motion tracking gone wrong).
4. Render the new frame again using the "regular" textures+lighting to produce a greyscale "ground truth" frame.
5. Use some annealing-like iterative algorithm to tweak the dithered frame (moving pixels, flipping pixels) to minimize perceptual error between that and the ground truth frame. You could split this work into tiles to make it more GPU-friendly.
Steps 4+5 should hopefully turn it from "MPEG gone wrong" into something coherent.
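Step 5 can be prototyped without a GPU. This is a toy sketch, not runevision's method or any shipping implementation: a greedy local-search pass (a simplified stand-in for annealing) over one small tile, where "perceptual error" is crudely approximated as the squared difference of 3x3 local means between the binary dithered tile and the greyscale ground truth. All names and the error model are illustrative choices of mine.

```python
def local_mean(img, x, y):
    """Mean over the 3x3 neighbourhood, clamped at tile edges
    (a stand-in for a proper perceptual low-pass filter)."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) / len(vals)

def perceptual_error(dither, truth):
    """Squared error between blurred dither and blurred ground truth."""
    h, w = len(truth), len(truth[0])
    return sum((local_mean(dither, x, y) - local_mean(truth, x, y)) ** 2
               for y in range(h) for x in range(w))

def greedy_pass(dither, truth):
    """One sweep over the tile: keep any pixel flip that lowers the error.
    Real annealing would also accept some uphill moves and cool off."""
    h, w = len(truth), len(truth[0])
    err = perceptual_error(dither, truth)
    for y in range(h):
        for x in range(w):
            dither[y][x] = 1 - dither[y][x]        # try the flip
            new_err = perceptual_error(dither, truth)
            if new_err < err:
                err = new_err                      # keep it
            else:
                dither[y][x] = 1 - dither[y][x]    # revert
    return err

# Toy 4x4 tile: mid-grey ground truth, worst-case all-black starting dither
# (standing in for the reprojected previous frame after occlusion holes).
truth = [[0.5] * 4 for _ in range(4)]
dither = [[0] * 4 for _ in range(4)]
before = perceptual_error(dither, truth)
after = greedy_pass(dither, truth)
```

Recomputing the full error per candidate flip is obviously wasteful; in the tiled GPU version you'd only re-evaluate the filter footprint the flipped pixel touches.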
That's a very narrow view of the world. One example: in the past I have handled bilingual English-Arabic files with direction switches within the same line, and Arabic is written from right to left.
There are also languages that are written from top to bottom.
Unicode is not exclusively for coding; on the contrary, I'm pretty sure coding is only a small fraction of how Unicode is used.
> Somehow people didn't need invisible characters when printing books.
They didn't need computers either so "was seemingly not needed in the past" is not a good argument.
Yes, it is. Unicode has undergone major mission creep, thinking it is now a font language and a formatting language. Naturally, this has led to making it a vector for malicious actors. (The direction reversing thing has been used to insert malicious text that isn't visible to the reader.)
> Unicode is not exclusively for coding
I never mentioned coding.
> They didn't need computers
Unicode is for characters, not formatting. Formatting is what HTML is for, and many other formatting standards. Neither is it for meaning.
The fact is that there were so many character sets in use before Unicode because all these things were needed or at least wanted by a lot of people. Here's a great blog post by Nikita Prokopov about it: https://tonsky.me/blog/unicode/
Of course! I bet there are tons of ideas that didn't make it into Unicode, for better or worse. Where you draw the line is kind of arbitrary. You, personally, can of course opt out of all of that by restricting yourself to ASCII only, for example. But the rest of the world will continue to use Unicode.
My early compilers used code pages to work with Japanese, French and German customers. The original idea of Unicode was absolutely brilliant and I was all for it. D was an early total adopter of Unicode (C and C++ followed years later). I rejected code page support for D.
Its mission was to support all the letters in all the languages, which was a good, straightforward mission. But then came fonts, formatting, layout, rendering, casing, sort ordering, normalization, combining, vote-for-my-letter-and-I'll-vote-for-yours, emoji, icons, semantic meanings, Elvish, people who invent things and campaign to put them in so they'll leave a mark in history, ...
True. And nearly all of them are obsolete. Many were intended for control flow on interactive terminals, which have long since passed into obsolescence. When was the last time you embedded a CTRL-C in text? The only ones that matter any more are newline and space.