COMPUTATION HAS A BODY

Science progresses in bursts. And sometimes, especially given the intensity of the science-news cycle, interesting happenings make very little impact on the hypegeist. So this irregular series keeps you informed on what might otherwise go unnoticed at the intersection of science and its direct – or indirect – implications for AI.
This month, three papers piqued our interest. And they reminded us that much of the compute we all use could be considered over-engineered. Silicon is the universal substrate. Full precision is the universal assumption. Complete logs are the universal default.
None of these defaults is free. They often feel free because they're decided upfront and work out cheap per unit. Plus, we rarely consider the alternatives.
But what if we did?
Each of the three papers shows what changes when you stop using the default and start fitting the compute to the actual operation. Which, with token use spiralling and quality under pressure in many organisations, is an ever more interesting proposition.
1. Everything in its right place
Let's start with the substrate. Silicon is the universal answer to almost every computing question. And it's good enough that we've stopped asking whether it fits our use cases.
So a paper in npj Unconventional Computing that builds on ChemComp – a framework that translates equations into chemical reactions – is a breath of the freshest air. It shows that chemistry can compute, leveraging the dynamics of the reactions themselves.
Why bother? Because equations that model how systems react to new conditions could become chemistry experiments. Or, to reverse that idea, chemistry experiments could become mini-computers.
Ideas which are neither fanciful nor futile. Biological cells already work this way. Metabolic networks perform mathematical functions. Subcellular compartments regulate signalling, transport and decay.
Of course, chemistry is not about to replace GPUs. Chemistry is slow, hard to read and difficult to scale. But it does have practical potential. Many maths problems focus on patterns that emerge from interactions – how ripples form on water, how stripes appear on a zebra, how steam condenses on glass. Chemistry is already producing those patterns naturally. So, set an experiment up the right way and the chemistry becomes a computational substrate for certain problems, at a tiny fraction of the energy silicon would burn.
Because, rather than moving electrons across transistors and shuttling bits between memory and compute, chemistry performs the operation in situ – as a side-effect of a reaction that was happening anyway.
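To see what "the pattern is the computation" means in practice, here's a toy you can run – not ChemComp itself, just the textbook Gray-Scott reaction-diffusion model behind those zebra-stripe-style patterns, with parameters chosen purely for illustration.

```python
# Toy Gray-Scott reaction-diffusion model. Not the ChemComp framework -
# just the classic system that produces spots-and-stripes patterns.
import numpy as np

def laplacian(Z):
    """Discrete Laplacian: how each cell differs from its four neighbours."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One update: diffusion plus the autocatalytic reaction U + 2V -> 3V."""
    uvv = U * V * V                                   # the interaction term
    U += dt * (Du * laplacian(U) - uvv + f * (1 - U))
    V += dt * (Dv * laplacian(V) + uvv - (f + k) * V)
    return U, V

# Seed a uniform field with a small square perturbation, then let it run.
n = 128
U, V = np.ones((n, n)), np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
for _ in range(5000):
    U, V = step(U, V)
# V now holds a spotted/striped pattern 'computed' by the dynamics alone -
# no instruction stream, no memory shuttling, just local interactions.
```

Nothing in that loop decides where the stripes go; the answer falls out of local interactions. That is exactly the property the chemical-computing work leans on.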
Our takeaway? A reminder that silicon isn't always the right call. And that doing the computation close to the signal is a good way to avoid unnecessary cost.
The equivalence in AI: Most frontier AI runs on a concentrated stack of GPUs, high-bandwidth memory, networked data centres and complex software. A powerful stack – and an expensive one to power, cool and feed with data.
Which means it's not always the right answer. A local model hooked up to a sensor, a lab instrument, a memory layer and a workflow engine can be more efficient. And how it's hooked together also matters – each placement will change cost, latency, resilience and control.
2. When less is ultimately more
But whatever the substrate, let's consider the correct level of precision. Full detail is the default in much enterprise data. A spreadsheet will happily give you a number to many decimal places. And a sensor records at whatever resolution the hardware allows. Most of the time you only need 81%, or 80.9% – not 80.997641%.
We don't worry because detail in a single Excel cell, recording a single sensor reading, costs nothing extra. Yet, if there are millions of signals shipping across a network and being stored on a daily basis, the cost of being hi-res quickly adds up.
A paper in Communications Engineering asks how to solve this, and offers up a trick cells use to handle it differently.
The paper's specific case is dynamic range. The faintest detectable whisper and a thundercrack differ by a factor of millions. So do dim starlight and bright sunshine. A conventional chip records everything across those extremes on the same fine grid – and burns power proportionally.
But cells take a different route. They record ratios, not absolute levels. A doubling matters. A single-degree change against a huge background does not. Which makes sense – you notice an extra biscuit on a plate. You don't notice an extra grain of sand on a beach. That's Weber's law.
And the authors propose a new chip design – built in simulation – that works the same way. It takes an input that varies by a factor of 10,000 and turns it into one of eight values – which band the signal is in.
Brutal compression, but the assumption is that the chip is operating in a scenario where the discarded detail isn't important.
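For a feel of the encoding, here's a minimal software sketch – our illustration, not the authors' circuit design. The band count and range match the paper's description; the band edges and clipping are our choices. Eight logarithmically spaced bands over a 10,000:1 range mean equal ratios, rather than equal differences, land in adjacent bands.

```python
import math

def band(signal, lo=1.0, hi=10_000.0, n_bands=8):
    """Map a positive signal onto one of n_bands log-spaced bins."""
    s = min(max(signal, lo), hi)                  # clip to the supported range
    frac = math.log(s / lo) / math.log(hi / lo)   # 0..1 position in log space
    return min(int(frac * n_bands), n_bands - 1)

# Equal *ratios* move you the same number of bands - Weber's law in 3 bits.
for s in (1, 10, 100, 1_000, 10_000):
    print(f"{s:>6} -> band {band(s)}")            # bands 0, 2, 4, 6, 7
```

Three bits per reading instead of a full-precision float is the brutal compression above – and it's why the power budget collapses.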
The benefits? The simulated chip uses under a millionth of a watt. And there's a big opportunity for further improvements – the biological circuit it draws from covers an even wider range using a million times less power. The organic substrate is more efficient than silicon here too.
The equivalence in AI: A manufacturing line, hospital ward, lab instrument or insurance telemetry stream can produce a lot of data. Should the AI model be receiving all of it raw? Sending everything upstream creates latency, cost and governance exposure.
So there are significant advantages to asking a series of questions up front. Which signals deserve full precision? Which can be compressed? Which should always stay local?
3. The state we're (not) in
The default memory in enterprise AI is often the log. We log the expanding context being passed in, the long chat history that shapes it and the ever-growing vector store. We need records of everything that happened, queryable on demand.
But do we always need that? After ten thousand claims, an underwriter doesn't consult a log when a new case lands. Their judgement is made from their tacit knowledge. They look at a borderline case and they know the answer. The ten thousand cases shaped them; they don't need a file to open.
That's the difference between state and a record. A record is something a system queries. State is something a system carries with it – the past, baked into the shape of what's there now. Many conversations call both 'memory' and treat them as interchangeable. They aren't.
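A minimal illustration of the difference, in code – our example, not the paper's. The record can answer anything about the past but must replay it; the state folds each event into one carried value and forgets the events themselves.

```python
class RecordBased:
    """A record: keep every event, re-derive the answer on demand."""
    def __init__(self):
        self.log = []                  # everything that ever happened
    def observe(self, x):
        self.log.append(x)
    def estimate(self):
        return sum(self.log) / len(self.log)   # replay the whole history

class StateBased:
    """State: the past baked into the shape of what's there now."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.value = 0.0               # one number, shaped by every event
    def observe(self, x):
        self.value += self.alpha * (x - self.value)   # fold it in, move on
    def estimate(self):
        return self.value              # O(1) memory, nothing to query
```

The ten thousand cases are in there – as a bias in `value` – but no individual case can be retrieved. That's the underwriter, not the filing cabinet.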
The reminder of that came from a Science Advances paper out of KAIST and GIST. The team built DNA circuits that act as memory and processor at the same time.
Which is interesting because conventional DNA circuits are one-use – they react once and they're spent. The new design uses molecules that change shape when a signal arrives and stay changed until the next signal. The molecules don't store a record of past events. They are the past, in chemical form. The next reaction starts from what's there. The team calls it a DNA bio-transistor and aims it at biosensing and diagnostics, not computing-at-scale.
The equivalence in AI: Today, every agent queries records. The model itself doesn't change between uses. The only thing that carries anything forward is the trail of context, history and notes written down somewhere. That's an architectural limit, not just an engineering choice. The kind of agent that approximates the underwriter's judgement does so by re-deriving its read from the log every time. A system with actual state wouldn't have to.
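Sketched as two loops – the names and the `model` callable are ours, hypothetical rather than any real framework's API – the contrast looks like this:

```python
def log_replay_agent(model, log, case):
    """Today's default: re-derive the read from the whole trail, every call."""
    return model("\n".join(log) + "\n" + case)    # cost grows with the log

def stateful_agent(model, state, case):
    """The alternative: carry a compact standing judgement forward."""
    answer = model(state + "\n" + case)
    # Fold the new case into the judgement - bounded, not appended.
    new_state = model("Update this judgement with the case below.\n"
                      + state + "\n" + case)
    return answer, new_state

# With a stub model, the difference is what each call has to carry:
echo = lambda prompt: f"<decision from {len(prompt)} chars of input>"
print(log_replay_agent(echo, ["case 1", "case 2"], "case 3"))
print(stateful_agent(echo, "standing judgement", "case 3")[0])
```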
Of course, the DNA work is biology, not software. The research doesn't show how to bake state into a software agent. But it sharpens the question. As AI takes on work that depends on accumulated judgement, the architectures that will hold up may be the ones that treat state as configuration, not as something to be reconstructed every time from a longer log.
In the end, at the end?
The brain analogy is the one we all reach for when talking about digital intelligence. But are we overlooking others? The cell analogy may point us in some very useful directions.
Cells are low-power, local, stateful and constrained by their environment. They don't move every signal to a central processor. They compute where the information already is. The organism benefits because computation is distributed, situated and embodied.
That is closer to some of the problems enterprise AI now faces. In science, manufacturing and regulated systems, deployment depends on the system the model inhabits: power, memory, sensors, instruments, workflow, audit and state.
The lesson is also about fit. A system has a body – its substrate, the depth of precision it records, the way it remembers – and the body shapes what the system can do and what it costs to do it.
So while the hypegeist is watching the model, the more pressing question may be: how can we improve the totality of the body?
Further reading
Chemical and physical computation
Johnson, C. G. M., Bohm Agostini, N., Cannon, W. R., & Tumeo, A. (2026). Energy-efficient scientific computing using chemical reservoirs. npj Unconventional Computing, 3, article 17. https://doi.org/10.1038/s44335-026-00053-9
Cannon, W. R., Johnson, C. G. M., Bohm Agostini, N., et al. (2026). A mathematical framework for thermodynamic computing with applications to chemical reaction networks. npj Unconventional Computing, 3, article 16. https://doi.org/10.1038/s44335-026-00057-5
Baltussen, M. G., de Jong, T. J., Duez, Q., Robinson, W. E., & Huck, W. T. S. (2024). Chemical reservoir computation in a self-organizing reaction network. Nature, 631, 549–555. https://doi.org/10.1038/s41586-024-07567-x
Bio-inspired encoding and memory
Oren, I., Gupta, V., Habib, M., et al. (2026). Harnessing synthetic biology for energy-efficient bioinspired electronics: applications for logarithmic data converters. Communications Engineering, 5, article 44. https://doi.org/10.1038/s44172-026-00589-5
Sim, J., Kim, T., Kim, W., Jeong, S., Choi, E., Kim, S., Choi, H., Yim, S. S., & Choi, Y. (2026). Reset-free DNA logic circuits for real-time input processing and memory. Science Advances, 12, article eaeb1699. https://doi.org/10.1126/sciadv.aeb1699
Physical neural computing
Fischer, S., Ay, N., Landsiedel, O., Mohammadi, E., Otte, S., Renner, B.-C., & Rußwinkel, N. (2026). Beyond Silicon: Materials, Mechanisms, and Methods for Physical Neural Computing. arXiv:2604.09833. https://arxiv.org/abs/2604.09833
Context
International Energy Agency. (2026). Key Questions on Energy and AI – executive summary. https://www.iea.org/