Artificial life software engineering in ulam 2

Today I want to talk about software engineering for artificial life, and in particular, software engineering with the goal of getting artificial-life-style computations to perform useful work in the real world. This is... well, it's sort of overdue to just have one of the research notebook videos. Also, Robert Golosynsky, in one of the comments below, was saying I shouldn't just be posting these talks that I do at conferences, but should actually be talking directly to y'all, and not freaking out about the time limits so much.

So I just want to say, you know, be careful what you wish for, because this is probably going to go on a little bit. And in the middle, it's probably going to nerd out pretty hard for a while, because I actually want to look at some code. All right. So the plan is, first, to get a running start, sort of a 'Previously on', and then talk a little bit about where we are now. And I feel like I really haven't tried to state that as clearly as I think it is in my head, so I want to take another run at that; talk a little bit about ulam 2, the new version of ulam — we announced ulam 1, which was just called 'ulam', at ECAL at York in England, a year and a couple of days ago, so this is almost an anniversary; and show a few demos, and in particular, a demo of a fairly substantial example that I just got working in the last few weeks, which is, you know, I think a little bit badass, actually.

And take a look at that, and then sort of wrap up with wishin' and a-hopin'. So, if you're just joining us, the way we do computing today is based on this idea of hardware determinism. You make the physical hardware provide absolute repeatability — as good as you possibly can — so that same inputs, same program: same output, guaranteed. And of course, you know, in the real actual world nothing really works like that.

Everything's got a few little errors and rubs, and things go off in some weird way. But digital computer hardware is designed and manufactured to control that, at least as long as it takes to do whatever the computation is. And that's worked really great, but it has also gotten us into this bad situation where the guys doing software just get to think that reliability is taken care of by magic.

And the job of the software guys is just to be as efficient as possible at getting whatever one wants to get done, done. The consequence of that is that at the software level there's essentially no redundancy, no error checking, nothing that you would do if you couldn't rely on an absolute ironclad guarantee from the hardware that everything was going to be deterministic — exactly the same as expected. The world that we are living in today, with all of these crazy security faults, and data thefts, and breaches, and everything getting busted into — that is a symptom not so much of the fact that programmers are stupid, or that companies ship crap — although both of those things might be true in individual cases…

But really it's more a symptom that the architecture, the fundamental way that we architected computers — based on this idea of hardware determinism and efficient software — is broken. And it's fun to program in it, because you are the utter master of everything that happens. The memory, the RAM, is just completely passive. And you say 'Put a 1 there', 'Increment that variable', 'Test if this variable is bigger than that variable', and so on. And it does it: Yes sir; Yes sir; Yes sir, like that. And, you know, once you start to get a little good at programming, it really is quite a heady feeling. You can make it do exactly what you want — you are the Master of the Universe. You control everything. Granted, it's a fairly small universe — but it's yours, and it's great.

But it doesn't scale. It's the dream of dictators everywhere: that if I could just get everybody everywhere to obey me exactly, everything would be fine — and it never is. And these attacks that we are having, these computer bugs and viruses, all of this stuff that's happening, is because the whole way we've been set up is this sort of totalitarian government, where the data members that are keeping track of everything that's going on don't have any investment in what's going on, and aren't even allowed to care — to say 'Wait a minute, I got changed.' They must be utterly passive.

If that was the only way to build computers, well then, okay, we would live with it. But there is another alternative. This is what I'm calling 'living computation'. And this is where we don't assume hardware determinism. We say the hardware's going to try to do what it says, but it reserves the right to make mistakes sometimes. And software is going to have to be organized so that it can tolerate some amount of mistakes and still get useful work done.

Now that means the software is no longer going to be able to guarantee to get everything absolutely perfect, but that guarantee was only as good as the hardware guarantee, which was only good up to an asterisk anyway. In exchange for saying that software is going to have to deal with redundancy — taking care of checking its work if it's got a little spare time, and so forth — what we get is that the job of the hardware is to be able to be plugged together as big as we could absolutely want.

Need more hardware? Just plug more hardware in, and never run into a limit where, well, you know, sorry, we ran out of address space, we've got no more addresses, whatever it is. Indefinite scalability is the job of hardware: to be able to compute as large as we need. And given that we can no longer assume that we are in this totalitarian 'Everybody Will Obey Absolutely' world, instead we are going to let responsibility flow downwards toward the individual agents, by making methods where they can say, "Well, you know, I don't know the big picture, I don't know what's really going on, but I have enough information to know that it would be better if this thing was over there, so I'm going to move it over there, bup bup." Member of the Team versus Master of the Universe.

And the difficulty is... And so the argument I'm trying to make — for eight years now, or, in a broader sense, for thirty years or more — is that this living computation style, where, you know, you don't require everybody to be perfect, and you get along, and you make things better — is a better way to do manufactured computing than the way that we've been doing it. But getting from A to B, getting from the deterministic attractor to the best-effort attractor, is really hard, because there are zillions of decisions that we made in the process of coming up with the deterministic attractor that all reinforce each other.

And if you try to change just one of them, then it looks much worse. You try to change two of them, it looks much worse still. So the idea is we're going to escape from determinism in two or maybe three steps, depending on how you keep score. The goal is: using artificial life technology — things where, you know, software programs are reproducing, they're healing, they're growing, they're having kids, the kids are moving out of the house… all of that — in service of useful computations. Perhaps driving cars, one day — nowhere close now, but I'm not sure exactly how close the software in traditional computers is either. So how can we do it, when you have to change everything at once, to leap out of one valley and get into the other? You can do it with a very small strike team that makes an expedition to the other attractor and starts to set up a colony.

And that's what we've been doing for the last several years. The steps in building a colony are: Define an indefinitely scalable architecture — this kind of hardware that won't guarantee to be absolutely correct, because correctness is not even going to be well-defined, but does everything necessary so that if you have real estate, power, money and cooling, you can buy more of these things and plug them together and make a computer as big as you want, from here to the horizon. We've done that; the one that we're doing is called the Movable Feast, the Movable Feast Machine, M-F-M. Create a programming language that isn't counting on global determinism to deal with it. Now, there's lots of existing languages that one could talk about, that might get pulled in to this. But we made our own, for good reasons mostly, and mostly because we wanted to — because again, if you adopt an existing thing from the correct-and-efficient attractor, it starts to suck you back in. We need to reevaluate every damn assumption that goes into programming languages and so forth. So even though ulam, and ulam 2, looks quite familiar on the surface — we designed it that way to try to make it a little less terrifying to imagine, you know, 'Do I want to take a trip to the colonies?' — it's got some weird features; it's got some fundamentally weird features, like, you know, unary numbers, for example.

You don't find that in many languages. In some ways it's closer to a hardware description language than to a traditional programming language, because in a way we are lifting a lot of the tasks that hardware design traditionally would do up to the software level, making the hardware level more uniform, and indefinitely scalable. Develop simple tools and techniques — that's what we've been doing all along. We've got these demos, you've seen some of them on the YouTube channel, that do various things. We're continuing to make those. We'll look at some in a second. But then the next step is to go beyond just playing with the tiny little things, and start building more advanced tooling, more, you know, sort of factory stuff, to start settling down. To actually bring in some people that are not there just because they want to be explorers, but to be there because it's going to be useful for them in some way — or at least there's a narrative, a story, that's believable enough, about how they're going to get to something that's helpful.

And to do that we need software engineering. To do that we need to be able to go beyond tiny little, you know, one-atom things that do funny stuff, to ask 'How can we organize complexity to deal with stuff at multiple spatial scales? Multiple temporal scales?' Things running quickly inside of things running more slowly, and so on. And that's what we're starting to do; that's what I want to talk about today. In the future, in the near future hopefully, for step 5, we want to build a next-generation hardware tile — an indefinitely scalable hardware tile — prototypes. Those will be, you know, absurdly expensive and absurdly weak for what they do, but the goal is for them collectively to allow us to benchmark the 'Average Event Rate, Indefinitely Scalable'. The primary metric of these things is not MIPS — millions of instructions per second — but how many events you can deliver to each spot in the matrix that you're working on, in a given second.

Assuming that some of those things are going to have to talk to neighboring tiles, so there'll be communication and coordination involved… taking all of that into consideration, what AER can you show us — what Average Event Rate, events per site per second, on average? I will be thrilled if our next-generation hardware can show 10 AER. I mean, I'll be satisfied if it can show 1 AER, because we're just drawing a line in the sand. I'm not a hardware guy; I need hardware help. The point is to draw a line in the sand and say, you know, it's starting to look like this architecture could actually be useful for something — I bet these computer engineer guys, especially because now they're liberated from absolute determinism, could just crush this. They could be reaching 10 AER, 100 AER, who knows? And then, with that prototype hardware — less and less prototype, more and more useful as time goes by, hopefully — we start to actually be able to do system control demos. Take a bunch of these guys and have them become the sort of skin and brain simultaneously of a little robot, that kind of thing.
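Here's a minimal C++ sketch of the AER arithmetic being described — events delivered per site per second, averaged over the whole grid. The grid size, event count, and elapsed time below are made-up placeholder numbers, just to show the formula.

```cpp
// Minimal sketch of the AER metric: events delivered per site per second,
// averaged over the whole grid. All numbers here are illustrative placeholders.
#include <cstdio>

int main() {
    const double sites           = 48.0 * 48.0 * 4;  // e.g. a hypothetical 2x2 group of 48x48-site tiles
    const double eventsDelivered = 5.0e6;            // total events completed across all sites
    const double elapsedSeconds  = 60.0;             // wall-clock duration of the run

    const double aer = eventsDelivered / (sites * elapsedSeconds);
    std::printf("AER = %.2f events/site/second\n", aer);
    return 0;
}
```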

That's down the road. OK, so today, more about step 4. Oh, and then step 7 — you know, like Linus says, "World domination", or as I would say, "For the benefit of society." All right. So. Ulam 2. Announced July 30th — announced today; at least I'm recording on July 30th, we'll see when we get through post and so forth. So it's ulam 2 — there's been a bunch of little releases just trying to get the packaging working — and MFM 3.3.4, which might end up being 3.3.5. The easiest way by far is to use the personal package archive and install it on Ubuntu — any of the current long-term support releases, 12.04, 14.04, 16.04, on an Intel box, 32 bit or 64 bit. The more machine you've got, the better. If you've got an 8 core burner, well, that would be great, but it'll even run like... Is this thing running? Ah, there we go. So, right, can you see this? This is my little ancient EEE PC, I think it's the first generation 701 or whatever the heck it is.

It's running the Burn demo, which is one of the demos included in the ulam package which you get if you install... "Install now!" And it's running like crap, but it is running. The simulator will take advantage of as many cores as you've got, but it'll fall all the way back, and things'll just get slower and slower and slower. And so let's look at that demo, actually. One of the programs that gets installed is 'mfzrun'. 'MFZ' is the packaging format — like a JAR or a zip, in fact it's based on zip — that you can put together with ulam code, compiled code, initial kernel configurations that you want to do. If you just type 'mfzrun', you get a bunch of help, and at the very end you get the list of the demos that are included.

It sticks '/usr/bin' on; it doesn't really need to. Here's the Burn demo, that we just looked at on the EEE PC. It works here too. I was going to show you another one. Let's look at BaseLayer... I guess it's got a cap in it... yeah, it does. All right. So this one. Look at this. We've got a bunch of these TouchReporter guys sort of in the middle, and they are sensitive to mouse input. And this is just a tiny little down payment on how — you know, the Movable Feast grid is two-dimensional, but it has the potential to interact in the third dimension, through the Base layer, and there's a 'SiteUtils' library that you can write code with.

So this thing is detecting mouse motion. It can detect dragging. It can detect clicking, although you notice clicking is not necessarily hitting one exact site. It's more like imagining sort of a tablet with a finger; but also, the point is, we need to come up with things that will figure out how to debounce that kind of information ourselves, rather than just assuming we're going to magically be able to hit the one pixel, the one site, that we want. Also in this directory is the SiteRiter demo — there's a video of that on the channel — where, you know, is it starting to show up yet? It's starting to, there... Where, you know, it gets colors scribbled all over the screen, and even though there's only two atoms there, the colors get scribbled everywhere — and how is that possible? The way it's possible is that the colors are actually being painted onto the Base layer, so the atom can move on and leave an effect in the world.

This could be used for pheromone trails if one is doing an ant model, or a limited sort of communication, or a little bit of extra state that's accessible from the site that you're in. And the reason we can see it is because, in the full-on version of the simulator — this is where you get the interface, if you dare: you click on 'More>>' and you get the whole thing — 'Back' is what we're going to display in each little rectangle for the site; 'Middle' is what we display in a circle inside that; and 'Front' is what we display in a little teeny dot in the center. So we can pick whatever we want for all three of those layers. In this case we're picking the Site Paint, to see what the SiteRiter atoms are doing. Now again, with all of these demos, it's really just the simulator running from a particular initial configuration, so if we want to mess with it, we can do that. We can add a bunch more SiteRiters and see what happens. We can take SiteRiters and a bunch of TouchReporters and so on and so forth. OK? So. Man, I told you this was going to go on a while...

So we've got all these demos. And they're great, and, in addition to being fun to play with and fun to look at, what they're doing is training our eyes for the kinds of dynamics that happen naturally in these sorts of systems. I mean, a lot of these demos you can find in, like, NetLogo and other packages meant for agent-based modeling, and so forth. It's all very similar concepts. And they're worth looking at just to see... I mean, like in the Burn demo in particular. In a classic burn thing, you know, you can get a loop set up and it'll just burn around and around and around forever, which is sort of weird to think about burning. But it's as if you're imagining you have a circular forest, and the forest is burning, and it takes so long to get around the circular forest that the forest has regrown and is burnable again by the time it gets back.

And you can get these ring oscillators going. And it’s a model of things like transmission of signals in neurons in your brain. It’s related to how your heart works, the cardiac muscle pacing, and so on. All of those things are good to teach us what the sort of basic resources of our colony are like. But once we’ve played with a bunch of them, we need to start saying okay, well how can we build larger tools with them, and that’s where software engineering comes in.

And that's where several of the new features of ulam 2 come in. So ulam 1 — and this has been carried forward — is just a regular-looking programming language. I mean, if we look, for the sake of argument... let's look at SiteRiter. When we install the package everything ends up in /usr/lib/ulam. Let's see, on the ulam side, share/ulam/demos, so there's our demos. The SiteRiter is in the BaseLayer, like that. So here's the whole thing. We have a data member which is of type ARGB, which is a four-element array where each element is an Unsigned(8). So eight bits — an array of four groups of eight bits, 32 bits total. The 'behave' method is called automatically by the engine when an element, an atom of type SiteRiter, gets a turn to have an event, and it just pulls up its color, modifies it by a random number, and writes it back. Now you might think, ooh, what's going to happen when we have wraparound? What if I add -2 and the color is only 1? It's going to wrap around to 255 and the color is going to completely change.

Well, no: one of the unusual features in ulam is that arithmetic is saturating, so 1 minus 2 is 0 — if it's an Unsigned, which it is in this case... although we can't really tell that here; we'd have to go look in SiteUtils, which is one of the standard library packages, in order to find out what the type of Channel is — but as it happens, it's unsigned. So we modify the color, pinning at black and white if need be, and then we use the SiteUtils package to paint it on the floor, and then finally we swap ourselves with...
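Here's a conceptual C++ model of the saturating Unsigned(8) arithmetic being described — 1 minus 2 pins at 0 rather than wrapping to 255. This is not the actual SiteRiter source or the ulam library API; the helper name and the delta range are illustrative.

```cpp
// Conceptual model of ulam-style saturating Unsigned(8) arithmetic as used by
// the SiteRiter color tweak: 1 - 2 pins at 0, 254 + 3 pins at 255.
#include <cstdint>
#include <cstdio>
#include <random>

// Add a signed delta to an 8-bit channel, pinning at 0 and 255 (saturation).
static uint8_t saturatingAdd(uint8_t channel, int delta) {
    int v = static_cast<int>(channel) + delta;
    if (v < 0)   v = 0;
    if (v > 255) v = 255;
    return static_cast<uint8_t>(v);
}

int main() {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> smallTweak(-2, 2);  // small random color nudge (range invented)

    uint8_t argb[4] = {255, 1, 128, 254};                  // alpha, red, green, blue
    for (uint8_t& channel : argb)
        channel = saturatingAdd(channel, smallTweak(rng));

    std::printf("A=%d R=%d G=%d B=%d\n", argb[0], argb[1], argb[2], argb[3]);
    // In the real element, the modified color is then painted onto the base
    // layer and the atom swaps itself toward a neighboring site.
    return 0;
}
```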

Site 1 is the guy to the west — we are site 0, site 1 is the guy to our west. So that would mean sort of 'go west', except for the fact that we have — where is it, there it is — some metadata declaring that this thing is symmetric: it's rotationally and mirror symmetric. Which, in the case of the Movable Feast, means every time this guy has an event, pick among its legal symmetries at random and apply it. So in this case, west might actually be north, or what have you. OK? So ulam, ulam 1... from one point of view, it's a very conventional-looking language. It's got objects, it's got methods, it's got data members. They're weird data members: primitives and objects are allocated by the bit; arithmetic saturates. And then the biggest pain of all — which is the whole point — is that there are no pointers, and there's no random access memory. There's just the tiny number of your neighboring sites; we looked at 0, which is where I'm stored; we looked at 1, which is one site away, depending on the symmetry, and so forth.

And so, fundamentally, ulam is a method of programming transition rules, like for a cellular automaton such as the Game of Life. Except, relative to standard cellular automata, with a much bigger neighborhood and many more possible states per site. But unlike traditional programming languages, you can't take a pointer to anything. There is a stack, and you can have functions that call functions that call functions and return, and the stack is assumed to be ample enough to do serious programming. You can have local variables in stack frames — methods that have been called — but you can't take pointers to them, and you can't store them anyplace. The only persistent memory you have available to you is your event window: that little neighborhood, plus the base layer underneath you, the site paint and so on.

In ulam 2, we add single inheritance, like Java, and virtual functions, like Java and C++, and we even add reference variables — but they're limited. You can't have a data member that's a reference type, because it doesn't actually make sense: all of the persistent memory is out in the event window, and those things do not have addresses, so you can't actually take a reference to them. When you're executing on the stack and you're doing locals inside a function, you can, because all of that stuff is going to disappear as soon as the event is done. The stack unwinds, the event is done, some other guy gets called on his behave function and off he goes. One of the most aggravating things — which, again, is fundamental to the mission — is that all objects in ulam 1 are the same size; well, they all have to fit inside an atom, which is 96 bits, minus a bunch that are used for type information and error-checking, so you — the ulam programmer — only get to use 71 bits for an object.

And that's still true for anything that's going to be persistent, but we now have a notion of a 'transient', which is a struct that acts pretty much just like a class — you can have transients inherit from transients and so forth — but that can be much larger. They can be a kilobyte in size, if you want. Eventually there'll be a limit on stack memory, but, again, it's imagined to be reasonably ample to do software engineering with. And the keyword is 'transient' to drive home the fact that they only exist from one entry to a behave() function until the time that behave() function leaves. And you only get to do that for one event.
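Here's a rough C++ analogy for the distinction being described: persistent per-atom state has to fit in the 71-bit budget, while a 'transient' is larger scratch state that lives only on the stack for the duration of a single event. The struct layouts are invented for illustration, not the real ulam classes.

```cpp
// Analogy only: persistent atom state vs. stack-only "transient" scratch state.
#include <bitset>
#include <cstdint>

// Persistent state: everything the atom keeps between events (<= 71 bits).
struct AtomState {
    std::bitset<71> bits;          // the whole per-atom budget the programmer gets
};

// Transient-like scratch: big, stack-only working storage, legal for one event.
struct ScratchTransient {
    int      histogram[256];       // far larger than any atom could hold
    uint32_t scratchWords[64];
};

void behaveOnce(AtomState& me) {
    ScratchTransient scratch = {};                       // exists only for this event
    scratch.histogram[42] = 1;                           // ...per-event bookkeeping...
    me.bits.set(0, scratch.histogram[42] != 0);          // persist only what fits in the atom
}   // scratch is gone here; only the 71 bits in `me` survive to the next event

int main() {
    AtomState a;
    behaveOnce(a);
    return (int) a.bits.count();
}
```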

Still no pointers, still no RAM. OK. So we looked at these little guys, we looked at some of the little demos, but I want to talk about a bigger one, sort of more of an example of software engineering, that I really just got going, which is 'cell-1.0'. Now, you know, it's anything but a real sort of biological cell, so that name should be taken with a salt lick. But the mission of this exercise was to build some mechanism that could replicate a relatively large object — where 'relatively large object' means dozens or hundreds of sites, maybe, but not necessarily thousands or millions. Since everything is small, we have to size things as we go. But we want to be able to say, OK, here's a pattern, and you go wham, and something happens, and it takes some time, and then we've got two of them, and they're pretty much the same — with best-effort results, they are the same.

Something might go wrong, but assuming nothing does go wrong, they'll be copies. In order to do this — even though we are just programming these individual little transitions, one at a time — they have to compose together, by noticing that 'Oh, in my event window, look, I've got some blocks here that are Content, they're the kind of thing that I want to move... Oh, there's a thing I don't care about; I'm going to ignore that,' and so on. We have to code all that up, given that events happen asynchronously. There's no absolute addressing.

We cannot say, well, just give me the contents of location (26,34), because there is no (26,34). I'm at 0, I'm the center of my event. If I move and get another event, I still think I'm 0, even though it's different from the 0 I had on the previous event. When we're talking about objects, we now mean collections of atoms, like a molecule or cell, and when you start dealing with this — when you actually start programming it — you hit very, very basic stuff. Like, how do you tell whether an atom is considered part of the object or not? And worse, when you're starting to replicate, how do you tell whether this particular atom is part of the parent or part of the kid? You need to tell them apart, because different things happen to them once the kid separates from the parent.

So that's the problem of distributed control. One of the first things we had to deal with is: how do you die cleanly? I mean, you know, when you look at a movie — you know, like Tron or The Matrix or whatever it is — and somebody gets killed inside the digital world, they fall into these bits and the bits vanish, which is very convenient from the point of view of cinema, instead of having, you know, one pile of half-cooked bits over here, and another pile of half-cooked bits over there, like you'd have when someone got chopped with a sword in real life. But how the heck is that supposed to happen? I mean, all of those things representing that thing, they're distributed in space, they're made of zillions of pieces that are operating independently.

Object identity — clean living, clean dying — basic, basic stuff that comes up in this code. Distributed control, and then finally, the software engineering: you know, readable code that there's hope could be reused, either literally or with modifications, for other purposes. And again, in traditional cellular automata stuff that really doesn't come up, because the entire control system is imagined, number one, to be really part of the physics rather than in this programmable layer, and all of the work is in the layout of the pieces in the grid. But now we're taking a lot of that down... It's on the same idea that, you know, you could build a computer — and this is what Peter Corey(?) mentioned in the comments recently...

Some Corey — you could take the Movable Feast and implement a NAND gate in it, and then you could have it build up a traditional von Neumann machine on top of that — so isn't that kind of weird? And that's absolutely true. But we don't make actual normal computers today out of NAND gates; we make them out of more complex things, because it's more useful. And here we're going to do the same thing. Rather than saying that the goal is to find The Minimum Set — two symbols, neighborhood of four, and so forth — that can lead to a certain behavior, here we're deliberately giving ourselves more room to engineer. And that does not make everything sort of magically easy, because we still have the fact that it's asynchronous, the object is bigger than the neighborhood, and we have to schedule all this, we have to coordinate all this, without determinism or synchronous updates.

OK. So how are we going to solve these challenges? We're going to tell who's in and who's out by having a tag: if you have the same tag as me, then we are in the same object; if you have a different tag, you're not. We're going to put that tag up in a base class, which is actually going to be a template base class, so we can say how many bits we want to dedicate to the tag. In this demo, we have five-bit tags. And that's going to determine how we're going to kill things, and so forth. And... Well, actually, let's look... we'll come back to that in a minute. Oh man, it's like half an hour already. This is going to go forever.
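Here's a C++ sketch of that tag idea: a small templated base class carries an N-bit tag, and two atoms count as parts of the same object exactly when their tags match. The names and the 16-bit carrier are illustrative, not the actual ulam base class.

```cpp
// Sketch of tag-based object membership: same tag means same object.
#include <cstdint>
#include <cstdio>

template <unsigned TAG_BITS>
struct Tagged {
    static_assert(TAG_BITS <= 16, "keep the tag small; bits are precious");
    uint16_t tag : TAG_BITS;                    // e.g. 5 bits -> 32 distinct object ids

    bool sameObjectAs(const Tagged& other) const { return tag == other.tag; }
};

int main() {
    Tagged<5> a{0x17}, b{0x17}, c{0x03};        // two members of object 0x17, one stranger
    std::printf("a~b: %d  a~c: %d\n", a.sameObjectAs(b), a.sameObjectAs(c));
    return 0;
}
```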

All right. So here's the way the solution builds up. Suppose we take one of these guys. We plop it down, and it's a little — I think sixteen-site — line, and it heads east. We can make a bunch of them, and when they run out of the edge of the universe, they go away. I developed these things, which we call 'SwapLines', to do a little demo in a paper that's going to be coming out in the Artificial Life journal Real Soon Now. And it took me the longest time to realize... So what these guys do is they wait and make sure none of their neighbors are behind them — that the neighbors are all caught up — and then they swap themselves forward once. And so that way, even though the line isn't completely straight, it never gets more than 45 degrees off, it never actually tears, OK, because everybody at the front waits until the back man catches up. All right? It took me the longest time to realize — let's make something here — that, okay, here's a thing: if the SwapLines are swapping to the east, that means anything that they run into is actually moving to the west.
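Here's a toy C++ simulation of the SwapLine rule as just described: each row's segment advances one column only when neither row-neighbor is behind it, so under random, asynchronous updates the line can wiggle but adjacent rows never shear by more than one column. This is a sketch of the idea only, not the ulam implementation.

```cpp
// Toy model of the SwapLine advance rule under asynchronous (random) events.
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <random>
#include <vector>

int main() {
    const int rows = 16, steps = 400;
    std::vector<int> col(rows, 0);               // column position of each row's segment
    std::mt19937 rng(7);
    std::uniform_int_distribution<int> pick(0, rows - 1);

    for (int e = 0; e < steps; ++e) {            // one random "event" per iteration
        int r = pick(rng);
        bool aboveOk = (r == 0)        || col[r - 1] >= col[r];
        bool belowOk = (r == rows - 1) || col[r + 1] >= col[r];
        if (aboveOk && belowOk)                  // nobody behind me: swap forward once
            ++col[r];
    }

    int maxAdjacentShear = 0;                    // the line never tears: stays <= 1
    for (int r = 0; r + 1 < rows; ++r)
        maxAdjacentShear = std::max(maxAdjacentShear, std::abs(col[r] - col[r + 1]));
    std::printf("max adjacent shear after %d events: %d\n", steps, maxAdjacentShear);
    return 0;
}
```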

Like that. And this idea means that perhaps we could use SwapLines to come up with a method for moving 'large' objects — where 'large' now means bigger than an event window. We can actually see the event windows if we want... there. Let's get a line going. Yeah. So those... I don't know if you can see it; the little diamond guys here, that's the size of a single event window. And it's flashing around because the events are happening at those various places, and so on like that.

We can get rid of that... We could use the SwapLines to move large objects incrementally. And how to move a large object in a cellular automaton has been a bit of a challenge for quite some time. That's why, you know, in Conway's Game of Life, the glider — this tiny little configuration that moves — is so celebrated, and the larger spaceships and tractors and various things they have, that move while retaining their pattern, are all sort of inherently interesting. One problem, again, with the sort of deterministic approach — everything out in the world, a minimal amount in the ruleset — is that those things are extremely limited in what they can do. You cannot make a modified glider that has six guys in it, or that carries some kind of configuration. All of its shape is used for the dynamics of moving and repeating its pattern.

Whereas here, in principle, you know, we can change the shape, oops, we change the shape, we could add other stuff to it, and if we could figure out a way to release a bunch of SwapLines in front of it in some coordinated way, we could move the whole thing. OK? So the problem of large object motion, and a lot of people have taken cracks at this over the years..

You know, this would be one way to approach it. And the thing is — again, as you sort of think about the design — the reason the SwapLine is helpful for this is because it's not completely synchronous, because the world is not completely synchronous. And the world can't be completely synchronous if it's going to be indefinitely scalable. But it's a little bit synchronized. Things wait for the back man to catch up. So as long as the line hasn't actually gotten torn or damaged in some way, we can make certain assumptions about it: that if I'm not at the end of the line then I'm going to see a guy behind me, next to me, or in front of me, and those are the only three possibilities. So I can tell what's going on locally.

So a SwapLine is an example of a little bit of synchronization. And what we're trying to do, rather than assume we can have the architecture take care of synchronization for us and always go, you know, KaChunk KaChunk KaChunk KaChunk — the idea is to use just the amount of synchronization that we need to get the job we're trying to do done. And by keeping that stuff limited, we keep ourselves open to being able to apply it to different shapes, in different circumstances, with different 'inputs', in effect. We get more general, more flexible mechanisms if we limit the amount of synchronization to just when we need it. OK? So I took this idea of the SwapLine... and once we have this — so now we've got a guy who's moving — why couldn't we, as we're moving him, make a copy of his last line and leave it behind him? And then the next time he moves, make a copy of the next-to-last line, and leave that there as well, and sort of make an on-the-wing replicator.

And that’s what I did. So here’s an example. These are Blocks, and Blocks have a tag, a Content tag. This guy is hex 17 — again, we have five bit tags available, like that. Now, by itself, Block doesn’t do anything. But let’s pick, let’s see, let’s go east. Plop one of these CPlates in here. Oh, and I blew it again. The interface could use some work. Let’s get rid of this guy. All right. So, a lot is happening here, already. When we put down one of these things, what happens is it circumferentially.. circumferences.. it plates the whole object — all around, out to a depth of two.. This particular atom is called CPlate, because that’s what it does, it does circumferential plating.

And it's got a ton of stuff in it — whups. A ton of stuff in it. Whups, and I'm putting that... pull that in — that we're not going to really look at here. But among the many purposes that CPlate serves: it isolates the object that's going to be copied; it is used to establish an absolute addressing grid, from the lower-leftmost point of the bounding box of the object to the upper-right point of the bounding box of the object. And then each of the CPlates within the entire collective localizes itself relative to that grid. And that's what's happening now. And then — whups — once the grid has actually stabilized, we move on to phase two, which is the actual copying step. That uses the SwapLines, except now the SwapLines are like tractors, and each of them has a trailer going on behind, so it's moving two...

Moving the thing two steps at once. And when it gets to the column whose job it is to copy, it replaces the trailer with a modification of the thing that it’s copying. And then when it gets to the back, it dumps it off. So let’s just let this finish, for now, it’s pretty cool. Replication. It’s cool! We can send guys in different directions.. Oh and once again, come on.. North, send this guy north. Send this guy south, and so on. Oh, now actually look at.. see that guy’s kind of messing up, because he’s getting interference right in here. He hasn’t managed to successfully finish the plating because he’s running into interference from the kid that the other one is making.

But actually, in this particular case, it looked like the kid got out of the way, and now this guy is just behind, and hopefully he will successfully localize and maybe move on to actually making a copy. He may not. There are any number of ways that the replication can fail. In fact, we can induce one deliberately if we want. And again, what happened there? All of the CPlate, all of the stuff associated with replication, disappeared, but the object was left behind. There are other...

We can get into worse situations, where in fact the thing will kill the object entirely... We'll give it another shot. And... But in that case it was a miscarriage, but otherwise it worked okay. So what this is, it's kind of like a 3D printer, right, working layer by layer — except it's only in 2D, making 1D layers, so it's kind of a 2D printer. Like that. Which is only one sort of limited, but very effective, approach to replication. One of the goals of the original replication stuff, going all the way back to von Neumann and cellular automata, was to explore this duality — like DNA, although they didn't know what DNA was at the time — the duality between components of physical systems being interpreted as control, as execution, as things to do, versus being taken as absolutely passive data.

And in this case, we’re doing reproduction, in essence, by self-inspection, and we’re actually, you know, since we’re able to spread ourselves out as we pass SwapLines through, we can inspect a given line and say OK well I need another guy like that.. oh, did I send that guy to the east? All right, so there’s another one that’s sort of messed up. Now it turns out, this will eventually clean up after itself, but it’ll take a very long time because that’s our absolute last backstop.

In addition to being able to abort a replication, we also have poison... oops... oh, okay, that made a liar out of me. There we go, poison. So we really wanted to send a guy west, so why don't we send this guy west, and maybe he'll run into that poison and have a sad outcome. This is a lot of fun to play with. This is not — see, there we go — this is not yet in one of the demos; this is brand new. But how does this actually work? We could talk about software engineering; let's talk about software engineering. The circumferential plate — the CPlate — has a bunch of jobs. These are some of the data members. This is, you know, a little UML. With 71 bits... So what do we have? We've got S2D. S2D is a data type that is an array of two seven-bit numbers, like that. So an S2D takes up 14 bits all by itself, which is a lot, in ulam land.

I mean, once you start figuring out this stuff, it's sort of like 'bonsai programming' — making these little teeny things where one bit here is all you need... Oh, that's really three bits; couldn't I do it with two bits? Which is fun; it's got a sort of purity and cleanliness, sort of like assembly language in some ways. But the size of an S2D, being seven bits per coordinate, determines our sort of build plate, in, you know, 3D-printer land. So we have a chance at replicating things up to 128 by 128, more or less. It doesn't mean that's all very likely to succeed in any case, but that's our capability. We have a source, which is really quite nice — can't take the time to talk about it now — but every time a CPlate, which just automatically spreads around the object... every time a CPlate creates an offspring, it sets the source of the offspring to point back to it.
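Here's a sketch of that bit budgeting: two 7-bit coordinates packed into 14 bits give a build plate of at most 128 by 128 sites, leaving the rest of the 71-bit atom for everything else. The field names and the unsigned layout are stand-ins for the real S2D, not its actual definition.

```cpp
// Sketch of a squeezed 14-bit coordinate pair (stand-in for S2D).
#include <cstdint>
#include <cstdio>

struct S2Dish {
    uint16_t x : 7;              // 0..127
    uint16_t y : 7;              // 0..127
};                               // 14 bits of payload (the compiler may pad the struct itself)

int main() {
    S2Dish corner{127, 127};     // far corner of the largest supported bounding box
    std::printf("build plate limit: %d x %d\n", corner.x + 1, corner.y + 1);
    std::printf("coordinate payload: 14 of the 71 available bits\n");
    return 0;
}
```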

So in fact, when we got done, and we had... where is it... Here. You know, if we take one of these guys — I'm just going to start him up a little bit and then stop him. He's not running. All right. So here's the CPlate. Like this guy. Oops, I did it again. His source is 15; we have to look at the thing and find out which site number that is. You really want to have the EventWindow picture — I guess I don't have it here — can we see it here... oh... yeah, I was trying to figure out how to get the volume to be right. If you go to robust.cs.unm.edu — there, here — so site 15 is up one and two to the left from me, and that's the guy who created me, like that.

So if we do this, if we track the source here, the source actually creates... is a parent pointer in the offspring tree. And so this entire CPlate collective, around this whole thing, is also an N-ary tree, a general tree, that you can trace from the last guy who was born, through his parent, all the way back to wherever it was that I started this guy. Did I start it... yeah, I started it there, and we can tell because he's the only guy who has a source of 0, which is himself. And the algorithms take advantage of the fact that the CPlate has two addressing mechanisms. It has (x,y) coordinates — I am located at (4,4), the guy below me is located at (4,5), and so forth — but it also takes advantage of the tree structure in order to break ties and bias the gossiping algorithms at work. It's a little detailed, but it's very nice — and it's also familiar computer science.

These are trees. Unlike typical trees, where we try to make them as bushy as possible, these are very long, straggly trees, but they have uses nonetheless, and we are representing an N-ary tree in four bits. And how can we do that? We can do that because we know that our parent is in the same event window as us. With four bits we have enough, in the way that CPlate uses them, to cover all of the possible locations for our parent.
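Here's a sketch of that 'N-ary tree in four bits' idea: the parent link is not a pointer, just a 4-bit code naming which nearby event-window site holds the parent. The particular offset table below is invented for illustration; the real Movable Feast site numbering differs.

```cpp
// Parent link as a 4-bit index into a fixed table of nearby relative sites.
#include <cstdint>
#include <cstdio>

struct Offset { int dx, dy; };

// Hypothetical table: which relative site each 4-bit code refers to.
static const Offset kSiteOffset[16] = {
    { 0, 0}, {-1, 0}, { 0,-1}, { 1, 0}, { 0, 1}, {-1,-1}, { 1,-1}, { 1, 1},
    {-1, 1}, {-2, 0}, { 0,-2}, { 2, 0}, { 0, 2}, {-2,-1}, {-1,-2}, { 2, 1},
};

struct TreeNode {
    uint8_t parentCode : 4;      // code 0 means "I am the root: my parent is my own site"
};

int main() {
    TreeNode n{9};               // some non-root node whose parent is a nearby site
    Offset o = kSiteOffset[n.parentCode];
    std::printf("parent lives at relative site (%d,%d)\n", o.dx, o.dy);
    return 0;
}
```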

And if the parent needs to go the other way — if the parent needs to go down to the kid — it can just search its neighborhood and look for guys who are pointing back at it. So we can navigate in both directions in this tree — again, only a small number of steps in a given event — but over time we can pass information up the tree to the root, and we can broadcast information from the root out to the leaves, and so on. CPlate is not a standalone class; it inherits from UGridContent, which is in charge of performing the localization — finding the zero point and so forth. And C2D — those are 16-bit coordinates, so each C2D takes 32 bits, which we really do not want to blow — especially because we have two coordinates down here; that would be 64 bits right there, just to remember where I am and how big the space is.

So that's why we use the little squished-down one, and accept a smaller build plate in order to have room to do other things with our bits. So UGridContent provides mapping operations. And it's not the end of it, either: UGridContent inherits from RotateContent, which has a two-bit data member that specifies a rotation relative to 'east is east and north is north'. So the reason that we have an east CPlate and a west CPlate and so forth is because those trigger CPlate with a different orientation. When we make a guy heading south, the code is all written as if he's heading east, I think, and then the rotation is being applied transparently by RotateContent. And RotateContent, in turn, inherits from Content, and Content inherits from this thing called QID, and that's where we actually start. And QID is a... 'Q' stands for 'quark'...

Anything that's smaller than an atom — well, anything that an element is going to inherit from — is a quark. So QID is a quark, and so are Content, RotateContent, UGridContent, and so forth. They're kind of like abstract classes: they can have data members, but they're sort of incomplete until they've finally been instantiated as an element. And from elements we can make instances that are called objects. QID is a template, because you can provide information to say everybody has to have a given species ID — and that ends up costing nothing, because it gets compiled into the code, and represented by the type of the atoms, rather than in the small number of data member bits.

The number of tag bits we set to five, the number of progress bits... Another thing that QID provides is a timeout watchdog — and there it is, yeah. It provides a progress() method, so the code down below needs to call progress() every so often, or eventually the watchdog will not only destroy the guy who failed to show progress, it calls this emergencyDeath() method... Here we've got the sample. So this is what the behave method looks like at the level of QID. It calls super.behave() — which in this case there isn't any super, so it inherits from UrSelf, the sort of analog of Object in Java — and then all it does is count the watchdog, increment the watchdog, and if it reaches the alarm level, it calls emergencyDeath(), which erases itself — but first, it looks everywhere in the neighborhood and signals emergencyDeath() on everything else.
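Here's a C++ sketch of that watchdog pattern. One assumption: progress() is taken to reset the counter, which is implied but not shown in the talk; the alarm threshold and the neighborhood handling are also invented stand-ins rather than the real QID code.

```cpp
// Watchdog pattern: no progress for too long -> emergency death, neighbors included.
#include <cstdio>
#include <vector>

class WatchdoggedThing {
public:
    // Subclasses call this whenever they have made observable progress
    // (assumed here to reset the watchdog counter).
    void progress() { m_watchdog = 0; }

    // Called once per event; subclasses extend it (the analog of super.behave()).
    virtual void behave(std::vector<WatchdoggedThing*>& neighborhood) {
        if (++m_watchdog >= kAlarm)
            emergencyDeath(neighborhood);
    }

    virtual ~WatchdoggedThing() = default;
    bool dead() const { return m_dead; }

private:
    static constexpr unsigned kAlarm = 100;   // invented alarm threshold
    unsigned m_watchdog = 0;
    bool     m_dead = false;

    // Signal everything nearby to die too, then erase ourselves.
    void emergencyDeath(std::vector<WatchdoggedThing*>& neighborhood) {
        for (WatchdoggedThing* n : neighborhood)
            if (n && n != this) n->m_dead = true;   // "signal" the neighbors
        m_dead = true;                              // then erase self
    }
};

int main() {
    WatchdoggedThing a, b;
    std::vector<WatchdoggedThing*> hood{&a, &b};
    for (int event = 0; event < 200 && !a.dead(); ++event)
        a.behave(hood);                             // a never calls progress(), so it times out
    std::printf("a dead: %d, b dead: %d\n", a.dead(), b.dead());
    return 0;
}
```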

And that's in fact how the poison works, and how the ReplicationTerminator works. The poison signals emergencyDeath(); the ReplicationTerminator is handled lower down, and that's why it can actually leave the parent intact, at least in some cases. So this is a lot of stuff. Even squishing the data members — the coordinates — down to 14 bits is not enough for all the things that we need to do, so in fact CPlate has, yeah, here, this thing 'u' — which is not labeled with 'm', so it's not really good programming practice; we're not being systematic about our data members — which is a union. And so all of the... uh-oh. There, so here we go. There are three phases in the whole process: GROW is, you know, surround — build the CPlate out and perform the localization; COPY is when we actually do the copying, line by line; and then KID is at the end, when we're done and we're going to separate the kid and strip out, dissolve away, the CPlating. And each of those phases, being in a union, gets to make a different use of the same bits, like that. And the reason I wanted to bring this all up is that the copying mechanism has to do some pretty tricky synchronization.
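Here's a sketch of that phase-overlay idea: the GROW, COPY, and KID phases each reuse the same scarce bits for their own bookkeeping, so the per-atom cost is the maximum any one phase needs rather than the sum. The field names and widths here are invented.

```cpp
// Phase-tagged union: three phases share the same handful of bits.
#include <cstdint>
#include <cstdio>

enum Phase : uint8_t { GROW, COPY, KID };

struct PlateState {
    Phase phase;                    // which interpretation of `u` is currently live
    union PerPhase {
        struct { uint8_t waitingOn : 4, settled : 1; } grow;   // localization bookkeeping
        struct { uint8_t column    : 7; }              copy;   // which line is being copied
        struct { uint8_t dissolve  : 3; }              kid;    // teardown countdown
    } u;                            // all three phases share the same storage
};

int main() {
    PlateState p{};
    p.phase = COPY;
    p.u.copy.column = 42;           // only meaningful while phase == COPY
    std::printf("sizeof(PerPhase) = %zu byte(s)\n", sizeof(PlateState::PerPhase));
    return 0;
}
```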

So here, for example. We've got... OK, let's get rid of this guy. We've got this green guy and this red guy. And those are really CPlates... of course I just got rid of it, now I want it back. They are CPlates; these are all CPlates, and so forth. But what makes this one special is: he's the leftmost, highest CPlate that exists, and this guy is the rightmost, southernmost CPlate that exists, and so this guy becomes the...

In charge of... let it run a little bit... there. So this guy is in charge of issuing the SwapLines to be copied; he is the 'head commander'. This guy at the back is the 'tail commander'; he's in charge of allowing the lines to release into the kid. And if you look — it's a little hard to see in this one — the line actually squares up. It can get kind of wiggly as it goes through, but once it gets to the tail commander, everybody dresses the line, so that we know everybody is ready to move into the next phase, and everybody waits until the tail commander releases them. And in order for this... I mean, there might be other ways to write this, but this is the way I was managing to get it to work. And, all right, there it is. Let's get a south one going just for... ohh... every single time... It's like Sideshow Bob and the rakes.

So now the head commander and the tail commander are on different axes, because the thing's heading down, and so on. But the head commander does not release the next SwapLine until the tail commander reports that it's gone through. And how do they do that? Well, if you look at the little dots of color inside the CPlate, they're shifting between red and green and blue. And actually what they're doing is playing a little game of Rock Paper Scissors with each other. Rock is red, Paper is green, and Scissors is blue; and the only one who's allowed to change to Scissors is the head commander, the only one who's allowed to change to Rock is the tail commander, and the only one who's allowed to change to Paper is the root, the original guy where this all started. And they work together to coordinate between themselves. And again, this is another example of...

Where's my UML... This is another example of how we use a limited amount of synchronization — just what we need — in order to get a coordinated action done at a distance, like that. So we have a RockPaperScissors class whose job is exactly that, and it costs us two bits. And in fact we represent it as a Unary two-bit number, because we only need three states — 0, 1, and 2 — and that's all you can represent with two bits expressed in base 1. All right. Running out of time. Way running out of time here. So. This is a non-trivial amount of code, from some points of view — certainly compared to the Burn demo and the ForkBombs and so on.
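Here's a sketch of that three-state handshake: a shared Rock/Paper/Scissors token where each role may perform only its own advance — head commander to Scissors, tail commander to Rock, root to Paper, as described. The cycle order shown is an assumption, and two bits of unary are enough because Unary(2) has exactly the three values 0, 1, 2.

```cpp
// Three-role handshake on a shared three-state token (fits in Unary(2)).
#include <cstdio>

enum RPS  : unsigned { ROCK = 0, PAPER = 1, SCISSORS = 2 };
enum Role : unsigned { ROOT, HEAD_COMMANDER, TAIL_COMMANDER };

// Try to advance the shared token; only the role that "owns" the next state may.
bool tryAdvance(RPS& token, Role who) {
    if (who == HEAD_COMMANDER && token == PAPER)    { token = SCISSORS; return true; }
    if (who == TAIL_COMMANDER && token == SCISSORS) { token = ROCK;     return true; }
    if (who == ROOT           && token == ROCK)     { token = PAPER;    return true; }
    return false;   // not your turn: wait, which is what keeps everyone coordinated
}

int main() {
    RPS token = PAPER;
    const Role order[] = { TAIL_COMMANDER, HEAD_COMMANDER, ROOT,
                           HEAD_COMMANDER, TAIL_COMMANDER, ROOT };
    for (Role r : order)
        std::printf("role %u -> %s (token now %u)\n", (unsigned) r,
                    tryAdvance(token, r) ? "advanced" : "waits", (unsigned) token);
    return 0;
}
```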

That UML diagram was not complete; that was just a sample of the classes; it didn't actually have the behavior classes that were involved. There's a separate class for controlling what we do during GROW, and COPY, and KID, although KID is very short. But it's a flexible object replicator that works with pretty high reliability, as long as it isn't impacted by events happening around it in the world. And to some degree it even handles unexpected events, either by miscarrying, or, in the worst case, by actually cleanly killing the object that was trying to replicate. All right.

Looking forward, language development continues; there's much more that we could want in the language. I really want to get the motion... I mean, this is, in a way, a little bit out of order, because this sort of took the motion technology and applied it to do replication. It would be nice if these guys could just move themselves around as well — cells controlling their own motion and replication.

And we really want to start looking forward towards tiles, making new prototype tiles. We're probably going to use one of these little tiny system-on-modules that'll run Linux, so that the port of the code that's currently running in the simulator to the tile won't be that hard. The tiles'll be incredibly expensive compared to what they could be if they were optimized for this, but again, they can serve as a line in the sand. And that's about it. And let's do one last one — or maybe we can sort of leave this one running — and it's worth noting that we can actually reproduce things other than the Block Content, as long as they're sort of surrounded by Block Content, so that when the circumferential plating comes through, it'll know what it's supposed to count as inside and outside. So why don't we send this guy north, here? See if that works. Bigger objects take a lot longer to localize, and there's just, in general, more...

It's a higher-risk replication. Actually, this guy is going to be heading... you know, this dark area here is in between two tiles, where there's significant time shear — you know, tidal effects due to communication delays between the tiles — which also can stress the mechanisms. As much as possible I tried to make the mechanisms interlock, so that if something isn't ready — because, for example, time is running slower where it is — other guys will wait. But still, in particular, the ending of the growth phase — determining when we have localized, and we're not going to see any more, and the coordinates have settled — involves a certain amount of ballistics and timing. And there can be failures. We're probably in pretty good shape here, because now we're into the COPY phase, and that's a little bit more robust. Is this the be-all, end-all of object replication? Absolutely not. It's kind of brute force. I mean, it would be nice to, you know, have the thing grow from the inside, from a seed — a whole different approach. It would be nice if it could handle things in a sort of more, you know, sloppy way, rather than this literal line-at-a-time...

And one of the other things that CPlate does — I didn't get to talk about it — is passivization. Everything that inherits from Content — the blue Blocks in this case — when they see CPlate around, they suppress their normal activity. Because these things are all happening in parallel. Those Res that I stuck inside the 'O' there — they would be trying to move if they could; they don't inherit from Content, they don't know what's going on. Now, in this case, they were in an area that was small enough that CPlate got in there and kind of immobilized them all, sort of like in gel. But in a more general case, if you make a larger object with big gaps, you can have all kinds of difficulties: things will be moving around and doing their normal whatever-they-do, while the SwapLines are moving through and trying to copy them.

And in fact, it's easy to end up with not-quite-exact duplicates. And, you know, there's an awful lot of development that has to be done between here and there, but it's possible that we could end up seeing evolution happening not because it was programmed in deliberately, like we do with typical genetic algorithms and alife software models, but just because that's what happens — because of our best-effort implementation of replication, motion, and so forth, in this world of best-effort computing. I think those in fact aren't quite... no, they have seven Res each; not sure. That's it for now. Robert, I hope this was long enough for you; thanks for watching.
