What do we expect from artificial life studies, generally speaking?
The goal is to see interesting _emergent_ behaviors: basically, the goal is to create systems based on a certain number of parameters and functions, to let them evolve, and to see them display _nontrivial emergent properties_
Of course, nontrivial is something that is conceivably hard to define, but it is my firm belief that we can do this at the very least intrinsically in terms of machine learning models
Something that we would like to use this research for is to shed light on mechanisms underlying biological life, abstracting them away from the instances of life on this planet, and allowing us to explore their potential more generally (how can life emerge on other planets, in other media, in labs, in virtual environments?)
The questions of the summer of 2023
The ideas of Von Neumann explained clearly how in the case of cellular automata, the existence of universal self-replicators can be guaranteed from ideas that one can loosely call 'Turing-completeness' (though this is a term that is not used by Von Neumann), and that one can loosely link to the Kleene fixed-point theorem that guarantees the existence of quines in Turing-complete programming languages
This was the reason why the cellular automaton view was eventually favored by Von Neumann over the so-called 'kinematic' view he had initially considered, where a creature should make copies of itself by assembling building blocks that it would find (or gather) around it (which is still a very reasonable description of how cells and animals reproduce)
The project that comes the closest to achieving this is the Aliens project, but while the initial versions seemed to be about this precisely, it seems to have diverged in terms of objectives from that goal (which was never explicitly stated, anyway)
A key advantage of the kinematic view is that it is easy to make it explicitly compatible with the laws of physics, while the cellular automaton view seems much more abstract (it is not clear what Von Neumann's 29-state cellular automaton is living on... is the grid a discretization of the universe? are there any conservation laws?)
Can the two approaches (kinematic and cellular) be combined?
Also, due to the abstraction (which is both a blessing and a curse), it is not clear at what level we are modeling things anyway: the smallest self-reproducing entity in an animal is probably the cell (or perhaps some subset of the cell containing the DNA), and so the (very large) basic self-reproducing entities in a cellular automaton are conceivably only the cells of perhaps larger organisms... but at the same time this is never made clear or acknowledged
All of this led to the following natural (but largely unstudied) question
What do we realize if we try to make a kinematic cellular automaton?
That the cellular automaton on a grid should probably not be literally taken as a model of 'the world' (even though there are some stylized models of the world that can be represented by cellular automata, and even though this is implicitly the vision pushed by certain physicists, and perhaps Wolfram), at least as far as artificial life is concerned
What is important in Von Neumann's construction is the fact that we can emulate a Turing machine (quite literally, e.g. with a tape and a reader) in there, but there is no particularly deep reliance on e.g. translational symmetry of the grid (we need enough space to make a tape, a reader, and a constructor, but that is basically it)
Can Reaction-Diffusion Models be Understood to be Turing Complete?
At the same time, automata like Von Neumann's don't seem to respect any laws of physics (conservation of momentum, energy, mass, charge, etc.), so it is a confusing exercise to try to make them compatible with any grid discretization of physics
So, what I ended up thinking was that it was indeed important to have a system with discrete states living on a graph, but that if things were to be plausible, there would be no reason for that graph multiplied by time to be a discretization of space-time
Rather, it seemed that the cellular automaton ought to represent internal states of physical entities which would already have some sufficient complexity of their own (i.e. they would be big, have some kinematic internal states) and these internal states _would just describe an information layer_ of the physical system made of floating components, and the other state variables of these components (e.g. their physical location, their energy) could (should?) be modeled _separately_ (i.e. _not with a cellular automaton, but rather a classical physics model_), making it much easier to be physically realistic
Can we understand the Turing machines within an environment?
What would the kinematic cellular automaton look like?
There would be floating components (we can think of these as 'complex chemicals') with discrete internal states and some kind of bonds (corresponding to actual chemical bindings) which would exist based on the proximity of the corresponding components and would make components 'neighbors' (in a graph-theoretic sense of the term), and the discrete internal states would evolve according to 'cellular-automaton-like' rules, as a function of the internal states of the neighbors and some _incoming external signals_ interacting with the physics layer (e.g. the sensitivity of the component to external forces or to light)
In addition to influencing the internal states of the neighbors (via the evolution rule), the internal states would also influence the nature of the bonds (whether they are truly 'binding', and if yes, what the rest lengths are) and the friction, and possibly trigger the emission of _outgoing external signals_ that would affect or propagate through the physical layer
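To make the proposal above a bit more concrete, here is a minimal Python sketch of the kind of data structure and update step I have in mind; all names (`Component`, `update_bonds`, `update_internal_states`, `BOND_RADIUS`) are hypothetical placeholders, and the physics layer itself (forces, integration of positions, emitted signals) is deliberately left out.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Component:
    """A floating component: a position in the physics layer plus a discrete internal state."""
    pos: np.ndarray                            # physical location (handled by a separate physics model)
    state: int = 0                             # discrete internal state ('cellular-automaton-like')
    bonds: set = field(default_factory=set)    # indices of the components it is currently bonded to

BOND_RADIUS = 1.0   # proximity threshold below which two components become 'neighbors'

def update_bonds(components):
    """Bonds exist based on proximity: components closer than BOND_RADIUS become neighbors."""
    for i, a in enumerate(components):
        a.bonds = {j for j, b in enumerate(components)
                   if j != i and np.linalg.norm(a.pos - b.pos) < BOND_RADIUS}

def update_internal_states(components, rule, external_signal):
    """Synchronous update of the information layer: each new internal state is a function of
    the component's own state, the (sorted) states of its neighbors, and an incoming external
    signal coming from the physics layer at the component's location."""
    new_states = [rule(c.state,
                       tuple(sorted(components[j].state for j in c.bonds)),
                       external_signal(c.pos))
                  for c in components]
    for c, s in zip(components, new_states):
        c.state = s
```

In this sketch the internal states would then feed back into the physics layer (rest lengths, friction, outgoing signals), which is exactly the part that is left as a stub here.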
The cellular automaton rules would enable one to construct a 'nervous system' and a 'brain' in a 'creature', which would be an assembly of many components, and the creature could ensure its physical integrity by moving (via the internal states it would have a way to control the rest lengths of the bonds, thus mimicking muscles, and thereby move) and generally by taking actions
In addition to 'surviving', the creature (which at this point could be a cell, an organ, an animal, or a colony) could also, in some cases, replicate by consuming (e.g. 'aggregating') components it would find (either in raw form or by 'dissolving' them from other 'creatures' it would 'hunt and eat'), and it would use its 'brain' (which at the cell level would include the DNA and the ribosome) and its 'muscles' to assemble the consumed components
How to make an interesting kinematic cellular universal self-replicator?
I think it is a very important and interesting challenge to create a model of floating components where a universal self-replicator can be built, basically an entity made of the components that would be complex enough so that, put in a sea of floating components, it would start making copies of itself, and even evolve
This looks very doable to me (and it would be very interesting to do it), though it is fairly long and involves a number of design choices
A viable strategy to construct (i.e. engineer) this creature would be to first construct something like a nervous system which we could control (i.e. stimulate) externally, first manually, then with some trained external artificial agent doing it
Then, we could engineer a brain made of these components to perform the computations initially done by the external artificial agent, so that the creature could live on its own (and of course, the process of replication should now also include that of the brain)
Where is this going in terms of theory?
If we look at what questions we would be answering by creating a universal kinematic and cellular self-replicator:
From there, where would we go?
So we have a _medium_, i.e. the set of specific rules for the way the components update their internal states and bindings and outgoing signals as a function of the neighbors' internal states
Now, if the medium has a little bit of noise in it, it becomes a priori a bit more challenging to make a creature that will self-sustain and reproduce (with a high enough probability, at least)
That being said, at the expense of growing the number of components, we should be able to add some resilience, and to make creatures that will benefit from mutation abilities (i.e. if they have some code stored somewhere in the 'brain', that code could undergo mutations), i.e. they will (statistically speaking) reproduce faster in changing environments (and typically some will be able to compete with other variants of the creatures)
And in principle, we should have realized some of the visions of Von Neumann for universal resilient and mutating self-replicators in synthetic media, and this could conceivably lead to the first open-ended evolution system that would take into account physical constraints (the other open-ended evolution systems are things like Tierra, which are very far removed from the physical world)
The construction of one would show that in some specific medium, it is possible to construct a specific universal (i.e. containing 'code' information) self-sustaining and self-reproducing creature that could 'eat' the stuff around it to make copies of itself (and also mutate if needed) and would be of 'moderate' size (i.e. many orders of magnitude below Avogadro's number)
This would show that the creation of something that would look like life to us (if we were to discover such a process on another planet, say, we would probably call this life) is not very hard for some specific medium, if we engineer the creature
Now, two very important questions would of course be:
Could an engineered creature emerge out of a random soup of things if we wait for long enough (i.e. can random evolution 'discover' the bits of information needed to make the process)?
How specific is it to that form of medium?
Both questions are questions about the set of _rules_ and _configurations_, a space in which it would be nice to move, to gain substantial insight (it often happens in math that studying a whole one-parameter family of functions or equations is simpler than studying any function or equation at any specific parameter value)
How to move in the space of rules and configurations?
The idea to move in the space of rules and configurations seems particularly attractive if we find some 'exciting initial point'... and of course this is something that is vastly easier _if there is some continuity_ in the space of rules and configurations
This is a fundamental reason to study continuous models: not only are they easier to make 'physically plausible', but we can also move (at least with respect to certain directions) and transfer information from one place to another
What are models we should integrate into our understanding?
Reaction-diffusion models
Lenia
Back to More General Cellular Automata-Inspired Systems
The two approaches of Von Neumann
How to recognize the desired set of behaviors if we see them?
But if we are to move in the space of rules and configurations, we hope not to have to redo the engineering work every time
Intuitively, there should be some inherent 'complexity' that we would see in a 'living' system, and I would hope that we can recognize this intrinsically, at least with some machine learning methods (but intrinsically, i.e. without training on manually-labeled data)
In particular, the ideas that seem the most promising (but require substantial improvements to machine learning algorithms) are based on cross-entropy differentials:
If some signal is easier to predict from some perspective than from some other perspective, that could be the sign of some interesting thing going on from the point of view of life
The most exciting thing for now is the project with Vass and João where we compare the ability of a given architecture to predict things forward in time versus backwards in time (based on ideas from the Arrows of Time in LLM paper)... and where it seems natural to say that _if some nontrivial computation is happening in the system_ then _that system should be slightly easier to predict forward rather than backwards_
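As a toy illustration of this cross-entropy differential idea (not the actual setup of the project, which relies on trained neural architectures), here is a self-contained sketch where the two 'perspectives' are forward and backward prediction with a smoothed Markov model over discrete state trajectories; the function names and the sign convention of the gap are my own assumptions.

```python
import numpy as np
from collections import defaultdict

def markov_cross_entropy(sequences, order=2, n_symbols=2, alpha=1.0):
    """Fit a k-th order Markov model (with add-alpha smoothing) on the first half of the
    sequences and return the per-symbol cross-entropy (in bits) on the held-out half."""
    train, test = sequences[: len(sequences) // 2], sequences[len(sequences) // 2:]
    counts = defaultdict(lambda: np.full(n_symbols, alpha))
    for seq in train:
        for t in range(order, len(seq)):
            counts[tuple(seq[t - order:t])][seq[t]] += 1
    nll, n = 0.0, 0
    for seq in test:
        for t in range(order, len(seq)):
            p = counts[tuple(seq[t - order:t])]
            nll -= np.log2(p[seq[t]] / p.sum())
            n += 1
    return nll / max(n, 1)

def arrow_of_time_gap(sequences, **kw):
    """Positive gap = the system is easier to predict forward than backward in time,
    which we would (tentatively) read as a sign of nontrivial computation going on."""
    backward = [seq[::-1] for seq in sequences]
    return markov_cross_entropy(backward, **kw) - markov_cross_entropy(sequences, **kw)
```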
Another approach would be to take prediction ability differentials based on long-term versus short-term memory: if a system with long-term memory does better, it would seem to suggest that there is memory stored in the system, and that is also the sign that there is some non-trivial computer in the system
But then, if we have a computer, does it mean that the creature is alive?
Arguably a bacterial colony is more alive than a phone, though the latter performs much more complex computations and is arguably more intelligent (on human timescales, at least... on longer timescales the bacterial colony could become some very intelligent species and the phone not)
How to understand self-reproduction and in particular its mechanisms?
All life we know and can imagine uses in some way computers to achieve what the self-replicator in Von Neumann's automaton does... but that needs some clarification
What kind of Turing-completeness do we need for self-replication?
This is something that needs to be determined... if we look at the way Von Neumann made his computer inside the 29-state cellular automaton, it was very much like an idealized computer of that time (and incidentally, like DNA), but now if we look at the game of life, information processing seems to be dealt with somehow differently, using gliders, etc.
This includes the existence and resilience of creatures to changes of environments, the presence of universal (i.e. non-trivial) self-replication, the presence of open-ended evolution
How to recognize interesting complexity?
All of these things are 'Turing-complete' in some sense, but that is generally not studied in a sense that is meaningful for self-replication:
The fact that a 3-SAT solution verifier can be embedded within the system does not _directly_ imply that a universal self-replicator can be built in the system
Similarly, if we study Kleene's fixed-point theorem guaranteeing the existence of quines (self-replicating pieces of code) in a programming language, the proof is indeed based on self-reference and the Y-combinator, but one should have a lucid view of the medium of output to precisely determine what is possible
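To make the 'medium of output' point concrete: Kleene's theorem guarantees that programs like the following standard Python quine exist in any Turing-complete language, precisely because the program's output lives in the same medium (source text) as the program itself; for self-replication in a physical or cellular medium, that condition is exactly what needs to be checked.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```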
And relatedly, Turing-completeness for cellular automata was more recently popularized through Wolfram's classification, for reasons that appear _a priori_ disjoint from the concerns of self-replication
Does Wolfram's Class IV correspond to CAs with universal self-replicators?
A priori, there should indeed be a link (Turing universality should be somehow universal), but is the link clearly pinpointed anywhere? (Of course, one should not necessarily expect a rigorous proof, as the conjecture is not very well specified anyway, and we are quite far from proving it)
All in all, this suggests that there are a number of reasonable questions that should be asked about systems like more basic cellular automata (even the Game of Life), and that gaining insight in those would be valuable for the whole framework (besides obvious intrinsic interest)
And then, we would be able to give some estimate of _how hard it is for life to appear in some space of rules_, which is of course one of the fundamental questions that humans wonder about
In particular, the question of Eigen's paradox (how could the error correction mechanism that life on Earth has found develop in such a way that computation and replication become indeed possible, given that it is itself a fairly complex system to start with) is part of this class of questions (for life as we know it, at least)
2D nearest-neighbor cellular automata
Barricelli's model
Tierra
Chemical reaction networks
Below are a number of questions that I think are interesting in their own right
This is what Stas calls 'Turing patterns being Turing complete'... it is indeed remarkable that Turing, who built the theory of reaction-diffusion, did not ask this question, which is central to self-replication
Generally speaking, people who prove that something is Turing complete do not do it with any application in mind (in particular not self-reproduction), but for what we care about, we actually need the computations to have an output in the same medium as the one out of which we build the machines... so this distinction matters here
When are 'creatures' worth considering?
A weak point of Lenia (compared to e.g. the Game of Life) is that the sizes of many 'creatures' are actually comparable to that of the 'kernel' used to define Lenia... so it is not clear a priori that we get out more than what we put in, in terms of complexity... we want to see some _emergent complexity_ (this is the kind of thing that arises quite obviously if we have something that is Turing-complete in a reasonable sense of the term, but we are not even there for Lenia)
It would make sense that we could say we are on the right track if we see creatures that can 'store some information' like the creatures that Vass found with the two dots on their back
Still, then we would need to find some ways to make the information interact with itself and with other stuff around, but one thing after another
Can we make precise sense of Wolfram's conjecture?
A naive attempt to prove Wolfram's conjecture could look like this:
Re-define 'stationary' and 'periodic' (classes I and II) as 'trivially predictable' and 'chaotic' (class III) as 'trivially unpredictable', using two machine learning models, a 'trivial' one and a 'nontrivial' one... classes I, II, and III would be the classes where there is no difference between 'trivial' and 'nontrivial' (in cases I and II, they would both do well, and in case III they would both do badly)... class IV would be the cases where there is a difference
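A hedged sketch of what such a 'trivial versus nontrivial predictor' test could look like for 1-D nearest-neighbor automata; the specific choices here (a persistence predictor versus an empirical window-conditioned frequency model, the prediction horizon, the window radius, accuracy instead of cross-entropy) are arbitrary placeholders, and the names `simulate_eca` and `gap_score` are hypothetical.

```python
import numpy as np
from collections import Counter, defaultdict

def simulate_eca(rule, width=256, steps=257, rng=None):
    """Run a 1-D nearest-neighbor (elementary) cellular automaton from a random row."""
    rng = rng or np.random.default_rng(0)
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    rows = [rng.integers(0, 2, width, dtype=np.uint8)]
    for _ in range(steps - 1):
        x = rows[-1]
        idx = 4 * np.roll(x, 1) + 2 * x + np.roll(x, -1)
        rows.append(table[idx])
    return np.array(rows)

def gap_score(space_time, horizon=8, radius=2):
    """Compare a 'trivial' predictor (persistence: the cell keeps its value) with a
    'nontrivial' one (empirical frequencies conditioned on the local window) at
    predicting a cell `horizon` steps ahead from a window of radius < horizon.
    Returns (trivial_accuracy, nontrivial_accuracy); the gap is what we look at."""
    T, W = space_time.shape
    samples = []
    for t in range(0, T - horizon):
        for x in range(radius, W - radius):
            window = tuple(space_time[t, x - radius:x + radius + 1])
            samples.append((window, space_time[t, x], space_time[t + horizon, x]))
    half = len(samples) // 2          # fit the frequency model on the first half
    counts = defaultdict(Counter)
    for window, _, target in samples[:half]:
        counts[window][target] += 1
    trivial = nontrivial = 0
    for window, current, target in samples[half:]:
        trivial += (current == target)
        guess = counts[window].most_common(1)[0][0] if counts[window] else current
        nontrivial += (guess == target)
    n = len(samples) - half
    return trivial / n, nontrivial / n
```

The intuition this tries to capture: in classes I and II both accuracies are high, in class III both are low (information flows in from outside the window), and a class-IV-like rule is one where the richer predictor gains a clear margin over the trivial one.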
Then in that class, there would need to be information moving around, from which one could construct a computer, etc...
Coarse-Graining, Renormalization, and the Information Layer
There were a lot of discussions about renormalizations in the past months in the group, and I think this naturally goes hand in hand with the question of the information layer
What is a good coarse-graining, a priori?
In statistical physics, coarse-graining is often done with rules like the majority rule, and the reason for this is quite natural: because what is deemed interesting is what happens to matter _on average_ (the macroscopic observables exist as averages of microscopic ones)
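For concreteness, a minimal numpy sketch of that standard block-spin majority rule (the block size and the tie-breaking towards +1 are arbitrary choices of this sketch):

```python
import numpy as np

def majority_coarse_grain(spins, b=2):
    """Coarse-grain a 2D lattice of ±1 spins by taking the majority within each b×b block
    (the usual 'block-spin' rule of real-space renormalization); ties go to +1."""
    L = spins.shape[0]
    assert L % b == 0
    block_sums = spins.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    return np.where(block_sums >= 0, 1, -1)
```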
Observables relevant for life, on the other hand, are less about blind averages of stuff: the _structure_ of things is probably more relevant than the amounts or proportion of things
If we try to 'summarize' what happens in a space-time cell of a dynamical system (from a coarse-graining perspective, that is what we would want), then what we would like is to find the appropriate balance between the following objectives:
Have a coarse-grained dynamics that is maximally deterministic: the coarse-grained world should extract degrees of freedom that are as relevant as possible to describe their own future
Have the coarse-grained states be as informative about the microscopic states as possible (this is a bit vague, but one could require that as few bits of information about the microscopic states as possible are lost, irrespective of the 'importance' of the said bits)
Have the coarse-grained system be as simple as possible, i.e. be describable in as few bits of information as possible
It's obvious that if we pick any two of these three and optimize for them regardless of the third, then we will get something that is not very good (or even very bad: e.g. if we optimize for the first and third, we just get a trivial system)
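One (assumed) way to turn the three objectives into a single score for a candidate coarse-graining of a discrete trajectory, using plug-in entropy estimates; the functional form, the weights, and the name `coarse_graining_score` are free choices of this sketch, not something derived.

```python
import numpy as np
from collections import Counter

def empirical_entropy(samples):
    """Plug-in Shannon entropy (bits) of a list of hashable samples."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def coarse_graining_score(micro_traj, coarse_map, w_det=1.0, w_info=1.0, w_simple=1.0):
    """Balance the three objectives above for a trajectory of hashable micro states:
      - determinism of the coarse dynamics:     -H(C_{t+1} | C_t)
      - informativeness about the micro state:  -H(X_t | C_t)
      - simplicity of the coarse description:   -log2(number of coarse states used)"""
    coarse = [coarse_map(x) for x in micro_traj]
    determinism = -(empirical_entropy(list(zip(coarse[:-1], coarse[1:])))
                    - empirical_entropy(coarse[:-1]))                      # -H(C_{t+1}|C_t)
    informativeness = -(empirical_entropy(list(zip(coarse, micro_traj)))
                        - empirical_entropy(coarse))                       # -H(X_t|C_t)
    simplicity = -np.log2(max(len(set(coarse)), 1))
    return w_det * determinism + w_info * informativeness + w_simple * simplicity
```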
As Bara noted, the idea of a kinematic automaton equipped with a tape is mentioned in the book of Burks on cellular automata... at the same time, the notion of tape there is really to be understood as a blueprint (and there is no discussion of the components being basic in the same sense as the pieces from which the cellular automaton Turing machine is later constructed)... it is somehow assumed that a 'floating computer' can be found as one of the pieces (at least from my reading)...
In physics, one would typically focus on the last two, and that coarse-grained picture will typically end up reasonably predictable in the context of standard thermodynamics...
However, it can be argued that if we do this kind of thing, many phenomena associated with life will turn out to be very surprising; a striking feature of life is that if we know only the few bits of DNA information of a cell ('few' is relative to the total information needed to describe the matter in the cell), one can predict a lot about the macroscopic structure of the resulting creature; that information turns out to be very relevant at a macro-scale, if the cell is to grow into some sizable creature
Is a good coarse-graining the same as extracting the information layer?
The claim is that some coarse-grainings are _particularly adequate_, i.e. they are sweet spots in terms of description; at some scale, we end up with a description that is really a useful summarization of what happens at the lower scales (e.g. if we describe an ecosystem in terms of each of its animals and plants, this is probably a more reasonable description than describing each of the organs of each animal in the system)
If there is an information layer, I would tend to think that any good coarse-graining should reflect its content (this is a bit subjective, but of course if we design a system with an information layer, the degrees of freedom of that layer should end up among the degrees of freedom of the coarse-graining), because the bits of information in that layer are exactly instrumental to describe the interactions between the renormalized cells
How to grow a good coarse-graining?
I have the slightly strange intuition that it is easier to make good coarse-grainings than to define what good coarse-grainings are, or more precisely that it is easier to define an algorithm that will approximate a good coarse-graining than to define an explicit function that quantifies the quality of a coarse-graining
This would not be a particularly weird situation: diffusion models are great at generating plausible-looking images, but that does not mean that we can explicitly define a likelihood function on the set of images
This would involve starting with the trivial coarse-graining and growing it, by adding degrees of freedom that help either to inform the fine structure or that help make the coarse-grained dynamics more deterministic (i.e. that help improve the quality of the predictions of the other degrees of freedom of the coarse-graining)
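A sketch of that greedy growth idea, assuming we have a pool of candidate coarse degrees of freedom (functions of the micro state) and a score such as the one sketched above; all names here are hypothetical.

```python
def grow_coarse_graining(candidate_features, score, max_features=16):
    """Greedy sketch: start from the trivial coarse-graining (no degrees of freedom) and
    repeatedly add the candidate feature that most improves the score (e.g. a score like
    coarse_graining_score above, evaluated on the coarse-graining defined by the selected
    features); stop when no candidate helps anymore."""
    selected = []
    best = score(selected)
    while len(selected) < max_features:
        gains = [(score(selected + [f]), f) for f in candidate_features if f not in selected]
        if not gains:
            break
        top_score, top_feature = max(gains, key=lambda g: g[0])
        if top_score <= best:
            break
        selected.append(top_feature)
        best = top_score
    return selected
```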
Statistical Mechanics View on the Questions
After discussing with Jordan, a promising rephrasing of many questions could be in terms of 'typical' behaviors, or, taking a statistical mechanics framework, in terms of a probability measure on the space of rules
January 15th, 2025
For instance, the question of Wolfram's classification should perhaps be: if we have a typical rule (sampled from a probability measure), and from typical random configurations we neither see something that is obviously ordered nor something that _looks_ chaotic, is it reasonably obviously Turing complete?
The question is not, e.g., whether it is possible to find rules that are Turing-complete yet look chaotic on most configurations (it obviously is possible)... the question is _how likely are we to see these things naturally?_
Similarly, if we find that a random set of rules is Turing-complete, the question then becomes whether it is possible to construct a universal self-replicator (there are examples of things that are Turing-complete without having self-replicators, but are they really likely to be found?)
What is interesting about Von Neumann's construction is that the 29-state automaton is reasonably low in terms of complexity and that we can construct some universal self-replicator using it... and this suggests that this is also possible with many types of rules
Wolfram's conjecture was made for 1-D nearest-neighbor cellular automata, which is a small enough space so that there may not be clear counter-examples, but overall, it is clear that if we take a space of rules that is large enough there can be counter-examples... the question is instead whether it is _easy to stumble on such counter-examples_ and the answer is likely not
Drake's Equation for Cellular Automata
Of course, the original Drake's equation is a bit of a joke (it doesn't help with anything), but somehow what is possibly quite interesting is to ask a similar question not for a random planet, but for a random collection of rules in a cellular automaton
What is the chance that a random collection of rules is 'obviously periodic', vs 'a priori chaotic'?
If it is neither, what is the chance that we can find in it ingredients that suggest it is Turing-complete?
If it shows these ingredients, what is the chance that it actually can be shown to be Turing-complete?
If it can be shown to be Turing-complete, what is the chance that universal self-replicators can be found?
If universal self-replicators can be found, what is the chance that they would emerge naturally? (that is a bit like the Eigen's paradox question in the case of life as we know it)
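Written as a Drake-like chain over a probability measure on rules (the factors are just labels for the questions above, not quantities we currently know how to compute):

```latex
P(\text{life}) \;=\; P(\text{neither obviously periodic nor chaotic})
  \times P(\text{TC ingredients} \mid \text{neither})
  \times P(\text{Turing-complete} \mid \text{ingredients})
  \times P(\text{universal self-replicator} \mid \text{Turing-complete})
  \times P(\text{natural emergence} \mid \text{self-replicator exists})
```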