3,000 times per second. That's fast enough to trace electrical signals flowing through brain circuits.
It's incredible how much slower brain signals are than what we are used to dealing with in modern electronics. To debug a CPU in my laptop I'd need a scope with multi-GHz sampling rates.
Power consumption grows with the square of frequency. The brain's working frequency is only about 200Hz. 100T synapses (or a sizeable share of them) at that frequency puts us at hundreds of tera-ops/second of computational power - all at just 20 watts of electrical power. To get evolutionarily smarter, though, we'd possibly have to devote a bigger share of our body's 100-watt energy budget to the brain, or find a way to produce more energy.
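A quick back-of-envelope on those numbers (the active fraction per "cycle" is an assumed knob, not a measured value):

```python
# Back-of-envelope: synaptic "ops" per second and per watt.
# All inputs are rough, commonly quoted ballpark figures;
# active_fraction is an assumed free parameter, not a measurement.
synapses = 100e12          # ~100T synapses
rate_hz = 200              # assumed "working frequency"
power_w = 20               # rough brain power budget in watts

for active_fraction in (1.0, 0.1, 0.01):
    ops_per_s = synapses * rate_hz * active_fraction
    print(f"{active_fraction:>5.0%} active: "
          f"{ops_per_s:.1e} ops/s, {ops_per_s / power_w:.1e} ops/J")
```

At 100% activity the product is ~2e16 ops/s; "hundreds of tera-ops" corresponds to only around 1% of synapses being active each cycle.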
I'm not confident it's a direct comparison between the analog operations that happen in the brain and the digital integer or floating-point operations of a CPU. For instance, a single 20-transistor operational amplifier can perform (analog) integration or differentiation. It takes far more CPU operations, and operations per second, to perform a real-time integration or differentiation -- and even that's a discrete approximation limited by the operating frequency (edit: not that an op-amp doesn't have its own frequency limitations, of course).
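To make the contrast concrete, here's a minimal sketch (my own toy example): digitally integrating sin(t) with a first-order rule, where accuracy is bought with samples, whereas an ideal op-amp integrator produces the integral continuously:

```python
import math

# Toy example: numerically integrate sin(t) over [0, pi] with a
# simple Euler/rectangle rule. An ideal analog integrator does this
# in continuous time; digitally, accuracy costs samples per second.
def euler_integrate(f, t0, t1, steps):
    dt = (t1 - t0) / steps
    total = 0.0
    for i in range(steps):
        total += f(t0 + i * dt) * dt   # one multiply-add per sample
    return total

exact = 2.0  # integral of sin from 0 to pi
for steps in (10, 100, 10_000):
    approx = euler_integrate(math.sin, 0.0, math.pi, steps)
    print(f"{steps:>6} steps: {approx:.6f} (error {abs(approx - exact):.2e})")
```

For this first-order rule, each extra digit of accuracy costs roughly 10x the samples -- that's the per-sample overhead I'm pointing at.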
Analog computers can perform dramatically more complex work, faster, and with a fraction of the resources -- at the cost of accuracy. The brain is able to consolidate and handle various analog processes to generate what we perceive as the "work product."
Re: 200Hz, the brain is unlikely to operate synchronously, so it's unlikely to have an operating frequency at all, let alone 200Hz. There are of course local systemic limitations, such as the refresh rate of the eye, but that's more a frequency set by the "capacitive" capability of the retina.
A refresh rate in a digital system is global: the clock is the source of truth and great pains are taken to ensure clock skew (the difference in time between when the clock edge rises at one end of the wire vs the other) is within a tightly bounded range to ensure coherence of the overall system. An analog combinational computer would just do its work as the data arrives -- not at the rising or falling edge of some 200Hz signal subject to physiological limitations.
Do you know of any good papers on analog computers?
They sound like interesting tools, especially since there are some sorts of problems where small amounts of unpredictable error can be helpful, like repetitive Monte Carlo-style simulations.
But it also seems like an intimidatingly large area to start learning about.
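As a toy illustration of what I mean (entirely my own sketch, not from any paper): a Monte Carlo estimate of pi barely degrades when each "computation" is perturbed with small analog-style noise:

```python
import random

# Monte Carlo estimate of pi, with optional "analog" noise injected
# into each distance computation. Small unpredictable error barely
# hurts, because the estimator averages over many noisy samples.
def estimate_pi(n, noise=0.0):
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        r2 = x * x + y * y + random.gauss(0, noise)  # noisy "compute"
        if r2 <= 1.0:
            hits += 1
    return 4 * hits / n

for noise in (0.0, 0.01, 0.05):
    print(f"noise={noise}: pi ~= {estimate_pi(1_000_000, noise):.4f}")
```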
> A refresh rate in a digital system is global: the clock is the source of truth
I suspect you're already aware of this, but nonetheless - asynchronous digital logic (ie no clock) does exist! I've never had the chance to play with it though.
Definitely, also known as combinational logic (as compared to sequential logic). It's interesting stuff for sure, and I've wondered what a combinational CPU could look like. I should have said "in a synchronous digital system" so my bad. Frequency doesn't apply to combinational logic.
Digital systems are almost exclusively a pairing of combinational logic with digital "checkpointing" or pipelining.
With that in mind, as clock speeds increase, the difference between combinational and sequential logic in digital systems shrinks. Analog provides yet another layer of computational capability on top of that.
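A minimal sketch of that pairing (the stage functions and values are made up, just to make the idea concrete): combinational functions compute freely between edges, while registers "checkpoint" results only on a clock tick:

```python
# Toy model of sequential logic: combinational stages compute freely,
# registers latch ("checkpoint") their outputs only on a clock tick.
def stage1(x):           # pure combinational logic: no clock involved
    return x + 1

def stage2(x):
    return x * 2

reg = 0                  # pipeline register between the stages
out = 0
for tick, value in enumerate([3, 5, 7]):
    # On each clock edge, every register latches simultaneously.
    out, reg = stage2(reg), stage1(value)
    print(f"tick {tick}: input={value} reg={reg} out={out}")
```

As the clock speeds up, the wait at each register shrinks toward the pure combinational propagation delay, which is exactly the convergence described above.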
>Power consumption grows with the square of frequency
You may be thinking of the power of a mechanical wave, which is composed of the kinetic energy of particles in e.g. a vibrating string. A CPU's power consumption is linear with regard to clock frequency, and I'd hazard a guess the brain is similar. If consumption were quadratic, you'd basically be able to get power-free computation by parallelizing to a large number of very slow processors.
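To spell out that last point with a toy calculation (k is an arbitrary constant, the numbers are illustrative): if per-core power scaled as f², N cores at f/N would deliver the same total throughput for vanishing total power:

```python
# If P = k * f**2 per core, N cores each running at f/N deliver the
# same total throughput (N * f/N = f) but total power k * f**2 / N,
# which goes to zero as N grows -- physically implausible.
k, f = 1.0, 1e9
for n in (1, 10, 100, 1000):
    total_power = n * k * (f / n) ** 2
    print(f"{n:>4} cores at {f/n:.1e} Hz: total power {total_power:.1e}")
```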
> A CPU's power consumption is linear with regard to clock frequency.
That doesn't sound completely accurate. It's fairly well known that when overclocking processors, the "highest" achievable clocks can often take substantially more power than the frequency increase alone would suggest, e.g. 25% more power for (say) an 8% frequency increase.
So, it might be a linear thing for most of the range, but it doesn't seem to be 100% linear in all cases.
I'm talking mainly about dynamic (switching) power. I'm not sure about overclocking, but I believe it tends to involve also increasing voltage (power does scale quadratically with voltage) and a higher internal temperature (this will also lead to increased power). The parent was looking at the lower end though and implying that there are big wins to be had if only we could do our computation at around 200Hz, which I don't think is justified.
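For concreteness, the textbook dynamic-power model is P ≈ α·C·V²·f: linear in f at fixed voltage, but superlinear once higher clocks require more voltage. A minimal sketch with illustrative (made-up) numbers:

```python
# CMOS dynamic (switching) power: P = alpha * C * V**2 * f.
# Linear in f at fixed voltage; if higher clocks require higher
# voltage (as in overclocking), power grows much faster than f.
def dynamic_power(f_ghz, v, alpha=0.2, c_nf=1.0):
    return alpha * c_nf * 1e-9 * v**2 * f_ghz * 1e9  # watts

base = dynamic_power(4.0, 1.0)
oc = dynamic_power(4.32, 1.1)   # +8% clock, +10% voltage (illustrative)
print(f"+8% frequency, +10% voltage: {100 * (oc / base - 1):.0f}% more power")
```

With those numbers an 8% clock bump costs ~31% more power, matching the overclocking observation above without power being quadratic in frequency alone.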
When you have to be extremely noise/fault tolerant (e.g. you can literally remove half a brain and still function normally [1]), you over-design and over-engineer the system with massive redundancies everywhere. We don't know how much information is actually encoded in these trillions of low precision synapses. Our best deep learning models tend to be very over-parametrized, and are highly compressible. A CPU is nowhere as fault tolerant but it does not have to be, because modern solid state electronics is extremely reliable (and has much wider ranges of operating conditions than human brains).
“It takes a radiation dose of about 5 Sv to cause death to most people. Diodes and computer chips will show very little functional detriment up to about 50 to 100 Sv.”
It's worth noting that "little functional detriment" can still contribute to a catastrophic failure in a digital system. A flip of a single bit can bring down the thing you're trying to operate. Humans respond with much more delay and can usually complete most any mission, even if they die of complications later.
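The standard engineering answer to single-bit upsets is redundancy, the same thing the brain takes to an extreme. Here's a minimal sketch of triple modular redundancy (a toy version, not drawn from any particular system):

```python
# Triple modular redundancy (TMR): run the computation three times
# and majority-vote the result, so one corrupted copy (e.g. a
# radiation-induced bit flip) cannot bring the system down.
def majority_vote(a, b, c):
    return (a & b) | (a & c) | (b & c)   # bitwise majority per bit

x = 0b1011_0110
flipped = x ^ 0b0000_1000                # single bit flip in one copy
print(bin(majority_vote(x, x, flipped))) # -> 0b10110110, fault masked
```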
> 100T of synapses (or sizeable share of it) at that frequency puts us into hundreds tera-ops/second of calculation power
Considering the top supercomputers can do petaflops, humans are now permanently surpassed in terms of pure computation. And in a generation or so, our smartphones will be able to do petaflops.
It makes you wonder: if computers can now process faster, store more data, have greater bandwidth, etc., why are they still "behind" the human brain? Perhaps we are missing some aspect of the human brain that makes it special?
> Perhaps we are missing some aspect of the human brain that makes it special?
It's an analog computer, not a digital one. The computational power of a 100T-transistor analog computer would be truly staggering -- orders of magnitude more powerful than a 100T-transistor digital computer.
So cool. In my opinion, so many of the open problems in biology can be approached from the direction of specificity - from specificity in measurement to specificity in targeting interventions. For example, tools like this for elucidating cognition; alternatively, cancer (identifying and targeting problematic cells).
> Understanding information processing in the brain requires us to monitor neural activity in vivo at high spatiotemporal resolution. Using an ultrafast two-photon fluorescence microscope (2PFM) empowered by all-optical laser scanning, we imaged neural activity in vivo at up to 3,000 frames per second and submicron spatial resolution. This ultrafast imaging method enabled monitoring of both supra- and sub-threshold electrical activity down to 345 μm below the brain surface in head-fixed awake mice.