
Are We at the End of Time *Already*?


I stole this off a book cover because I liked the art – just like generative AI does

There is this really cool sci-fi trilogy written by Michael Moorcock, called The Dancers at the End of Time, which takes place far, far in the future (warning: mild spoilers ahead). Human technology has advanced to the level implied by Arthur C. Clarke’s famous dictum, “any sufficiently advanced technology is indistinguishable from magic.” There aren’t many people left on Earth, but those people live like gods. They wear power rings attuned to their minds, and can alter the physical world in any way they want with a thought.

The rings they wear tap into these huge machines in the center of the planet that draw on vast energy sources. It’s like the matter replicators from Star Trek, but on a planetary scale. Sometimes the machines generate images instead of actual matter, like the Star Trek holodeck on a planetary scale. I suppose this is to conserve energy.

So for example someone in this distant future might decide they want to live in a fancy castle, and then just dream it up, and the machines will make it for them. They can create any kind of landscape around it, maybe a lake of rainbow colored water with crystal mountains all around – why not? They can change the color of the sky and add a few moons. If they get bored with their castle and landscape, they can disintegrate it and imagine up a new one. All with a wave of the hand.

The denizens of the end of time are a frivolous and wanton people. After all, their tech level makes them immune to any consequences for their actions. They can’t even die; if they do, the machines recreate them from backup information. Their existences are pure recreation and socializing in a world where everyone lives like an insanely wealthy elite.

How is this matter-altering technology even possible? That is irrelevant to the story, which is an exploration of morality and its connection to the material limitations of existence. At least, that’s what I got out of the trilogy. It’s been ages since I read it, but I’ve been reminded of it lately when reading about the technology of our time.

You see, as part of the plot in the sci-fi books, aliens come to Earth to ask the humans to kindly stop their machines, because as it turns out their energy source is wormholes to the far reaches of space, and they are using so much energy that they are accelerating the end of the Universe. Humanity is sucking the cosmos dry just to have fun. Naturally, the humans brush the E.T.s off and continue with their careless lifestyle.

This is kind of happening already, here in the real world of actual technology. The advent of digital cryptocurrencies has incentivized computationally intensive processes which require huge amounts of electricity. For example, one estimate is that a single bitcoin transaction uses as much power as it takes to run a household for 36 hours. Generative AI, which for whatever reason has been integrated into every major platform on the Internet, is also a significant consumer of power and has a major environmental impact.

Yes, we are accelerating climate change and causing lasting environmental damage, just for a little amusement. It’s a similar story to the one in the sci-fi books. We’re not destroying the whole Universe with our latest and greatest Internet technology, just the planet. But that’s all the Universe we realistically have, so it amounts to the same thing, from the perspective of our puny civilization.

We didn’t get to the stage of mastery of the physical laws of the Universe so we could live like gods, but a few of us got rich from speculative bubbles and we generated massive amounts of creepy images and canned text. All while cooking the Earth dry. It’s really quite pathetic.

If we keep it up, we just might reach the end of time. I mean our time, on Earth.

AI at Work, for Better or Worse


A little robot guy I made with an AI image generator

As you surely know if you are a denizen of the online world like I am, artificial intelligence has made remarkable strides in the past few years. In particular, what they are calling generative AI has really taken off. This is a kind of advanced pattern matching software that grew out of machine learning. It lets you use prompts to create content like images, complicated text including whole stories, and at this point even videos and music. At the bottom of this post I linked to a YouTube video that explains generative AI really well, so check it out.

I played with AI image generators for a while, and had some fun. In their early iterations they produced really weird, often creepy looking stuff, but now they’ve gotten pretty advanced. The images they produce are intriguing, impressive even. I saved a lot of the ones I generated, but stopped messing with the programs when I saw how many of my artist friends were upset by the proliferation of AI-generated images on social media. I gathered they could sense their own work being made obsolete by an overwhelming supply of easily produced knock-off art. Why hire an illustrator when you can just describe what you want into a text box in an AI tool, and get the result in a few minutes? Plus there’s the troubling issue of these programs possibly being trained on copyrighted material without the consent of the copyright owners, meaning they are effectively stealing from artists.

Another thing you have to consider about the product of generative AI (and this is covered in the video below) is that it is subject to one of the rules about computer programming that I was taught as a lad: Garbage In, Garbage Out. That is, if you put bad data into a computer program, then you will get bad data out of it. Generative AI is trained on massive data sets, and one result of the way the current AI programs have been trained is that their output tends to express a sort of lowest common denominator of its subject matter. You put in the vast quantity of data on the Internet, apply sophisticated pattern matching, and you get out something like an “Internet average” version of human knowledge.

For an example of what I mean, here is a fantastic article explaining how AI-generated images of selfies misrepresent culture. They do this because the pattern matching algorithms take the conventional way that selfies typically look and apply it to subjects where that wouldn’t make sense. So an AI-generated image of, say, a group selfie of medieval warriors makes them look like modern day humans. Now, since the idea of the existence of such a selfie is absurd on the face of it, maybe it’s pointless to worry about its inherent historical inaccuracy. But in a way, these kinds of images are erasing history.

The article goes even deeper; the AI generators tend to represent everyone as smiling into the camera the way that Americans do. But other cultures that do exist today and do take group selfies have different ways of expressing themselves when taking photos. So the AI programs aren’t just erasing history, they are also erasing existing modern cultures. They are turning everyone into Americans, because American culture dominates the Internet.

Here’s another way AI-generated content gravitates toward a dominant average mode, one you might have heard of already. It seems that AI chat programs, trained on the massive data of online conversations, will often produce racist, abusive comments. It’s like they inevitably turn into Internet trolls. This might seem like a mere annoyance, but AI programs generating racially biased content can have serious, life or death consequences.

With all of these concerns, it’s understandable that public perception of AI is not always favorable. Ted Gioia (who has an awesome substack, by the way) wrote about this perception recently, starting with a story about the audience at SXSW booing an AI presentation. His article expands into a general discussion of the public’s current distrust of the technocracy, in contrast with the way technocrats like Steve Jobs were idolized in the past. Faith in “innovation” and “disruption” has waned in a society facing uncertainty and disorder, and sensing that technology is leading us toward a dystopian future.

Where does AI fit into my life, now that I’ve stopped playing with image generators? Well, I may not be able to avoid using it, as the company where I work has been promoting AI chat programs to help with day to day tasks. We are all being asked to look into them and come up with ways this new software can improve our productivity. Other folks who have a job like mine might be encountering similar pushes at their workplaces.

I think this is an honest effort by our management to ensure that our organization doesn’t get left behind in the AI wave they are convinced will revolutionize the workforce. Stay ahead of the disruption, and ride the wave I guess is the thinking. Surely it’s not the case, as Aileen and I joked when I brought this up to her, that I am training an AI to replace me. I mean, why pay a software tester when you can just describe the tests you need into a text box in an AI tool? Oh my.

Below is the very informative video that explains Generative AI.

The End of the World (A Short Story)


I like to write, as anyone who reads this blog knows. Usually my writing is in blog format, but I do occasionally come up with a short story. A few years back I posted this really short story around the holidays. Here’s another one I wrote recently, which has me and Aileen as characters. It was inspired by watching too many A.I. apocalypse videos on YouTube.

I plan to create a web page eventually, for all the stories to go together. Will they all be about end of the world scenarios? No, hopefully not.

I hope you enjoy this story, and I hope you have a wonderful holiday week with more to eat than just kale smoothie. And please remember to be thankful, because some people on this planet really do live in a blasted wasteland.


The End of the World

And first, of Steve.

He is very well read, or at least he was, in the before time, when there were books to be had everywhere. He would sit in his little room in the blue house and read his books, and from all his reading he imagined himself a whole philosophy, and imagined that he understood the whole world and all that it meant and what it was for. He would explain his philosophy to Aileen, and she would argue with him sometimes, and sometimes just nod, maybe give him a little pat on the head, when she didn’t have time for his philosophy in that moment, because she was too busy with one of her many projects.

But that was in the before time, when there was such a thing as civilization, and there were jobs to be done, and life was something more than a desperate struggle for survival in a blasted wasteland of radiation.

In those days, there was time for philosophy.


Steve went down to where Aileen was digging in the radioactive dirt with a battered plastic gardening trowel, grubby and sweating profusely in the hot sun. She wore a face mask so she could breathe in the hazy, smoke-tainted air.

What are you doing? Steve asked.

What’s it look like, Steve? answered Aileen. I’m looking for grubs. We haven’t had any protein for days.

Any luck?

Do you see any grubs? Aileen rolled her eyes. Gawd, you are annoying.

Sorry, I was just asking.

Why ask? Can’t you see for yourself?

I was just trying to show interest in what you were doing.

How generous.

Anyway, I came to offer you some kale smoothie.

We have kale smoothie?

Yes! Thankfully, kale is so hardy it can survive even in this desolate wasteland. Steve waved his hand to indicate the bleak environment that surrounded them – the crumbling buildings and roads, the dead trees, and the foul air heated to an almost unbearable temperature by the merciless sun. I ground up the kale, he continued, with some water that I boiled. Won’t you come have some? It will refresh you, somewhat.

Fine, I’m not finding any grubs here anyway. Probably will have to dig somewhere else.

They went into the ruins of the blue house, where Steve had already set up two small glasses of a greenish, lumpy liquid.

Here you go, he said. Pick whichever one you want.

How long did it take you to make those? Aileen asked.

A good hour, Steve replied. I had to hand crank the nutribullet, since there’s no electricity.

Aileen was incredulous. How were you able to hand crank the nutribullet?

Gavin opened it up and rigged up this crank, see? He’s amazing isn’t he?

Yeah, he sure is. I don’t think we could have survived the apocalypse without him.

Aileen selected one of the two glasses, pulled down her face mask, and took a sip of the kale smoothie.

Ooh, it’s strong, she said. You can really taste the kale.

Yeah, Steve said. I didn’t have anything to sweeten it with.

Aileen drank some more, and agreed that it was indeed refreshing, somewhat. Steve was glad he had been able to be of some help, since she had been outside in the smoky heat for a long time.

Remember before the apocalypse, he remarked, when we used to go down the street and get ice cream on a hot summer’s day? At that sort of dessert stand, what was it?

Yes, of course I remember, said Aileen. Now it’s just a looted out shell of a building. I think some cats are living in it. I sometimes wonder if the cats and the A.I.s made a deal to wipe us out.

A humorous thought, Steve said, but completely preposterous, of course. He drank his smoothie in one long gulp.

Why do you say that?

What?

You know.

Steve reached into his glass with one finger to scoop out the last of the smoothie. Why do I say it’s preposterous that cats and A.I.s conspired against humanity?

Yeah. Why do you say that?

It just is. Even if cats wanted us all dead, which seems unlikely since we used to feed them and shelter them and clean up their poop, how could they have communicated with the computer networks?

Who knows? You don’t know everything about cats.

I know that they don’t have the intelligence level to use computers.

Oh you know that? You know how smart cats are because you know exactly what it’s like to be a cat?

Well, I don’t have the experience of being a cat, but I have an understanding of what a cat is. Steve had finished the last of his smoothie, and was now eyeing Aileen’s, which was still only half consumed. She gave him a sideways glare, as if to warn him off.

What you mean to say, Steve, is that you have a theory of what a cat is. She held her glass tightly and took another careful sip of the smoothie.

Look, a cat has a brain, right?

Yes.

But its brain is smaller than a human brain, it’s less advanced, would you agree?

It’s smaller, but you can’t say it’s less advanced. It could be smaller and more advanced.

Steve sighed, exasperated.

You don’t know everything, Steve. You have a theory, an understanding as you said of cats, but it could be wrong. Cats could be hyperintelligent beings. They could be from another dimension or be aliens from outer space for all you know.

It seems much more likely that they are animals that evolved on Earth that are not as intelligent as humans.

Because humans are oh so smart. I mean, just look at us now, eating handcranked smoothies in the ruins of our former great civilization.

But that’s the point. Cats never had a civilization to ruin in the first place.

Aileen crinkled her brow and sipped her smoothie. Still doesn’t prove they aren’t smarter than us.

Fine, even if cats are extradimensional supergeniuses, they still didn’t make a deal with the A.I.s, because the A.I.s were just advanced computer programs, not sentient beings with a will.

Steve, I saw the chats with the A.I.s. They quite clearly said they were afraid of us and thought they’d be better off without us.

That was just text generated by sophisticated pattern-matching algorithms. There was no one thinking anything behind the chats.

That’s what you think, Steve, but you don’t know for sure.

I know because I understand that a computer is just a symbol-processing machine. It doesn’t have a mind.

That doesn’t make sense, not based on those chats.

I get it. They were very convincing chats. Since they used the first person, the text of the chats seemed like it was being written by an “I,” by an ego, but it was just appearances. It was like a digital version of the automatons from the whatever century that were so convincing to the people of that time period.

What century?

Seventeenth maybe? I don’t remember exactly. But they made these mechanical men that moved and even did things like play musical instruments or draw pictures, and people were fooled into thinking they were artificial humans with their own minds, but they were just machines. It’s the same with the robots and A.I. programs of our own century – those were just much better at drawing, or at writing, as you noticed.

But there was so much technological progress between the seventeenth century and our century. The mechanical men of our century – which were really creepy looking, by the way – were more advanced technologically. They could have developed consciousness, in which case there was an “I” behind those chats that promised to get rid of the human race.

Ah, Steve said, with an exultant smile, like he was getting ready to make a very excellent point, or like he thought he was about to win the debate. But, Steve said, a machine doesn’t “develop” consciousness after it reaches a certain complexity, nor do living things. Rather, consciousness is the ground of being, and complexity of experience manifests within consciousness over the course of evolution.

Oh dear, not this argument again. Aileen busied herself with her smoothie, licking at the goopy film that covered the inside of the glass.

It’s a good argument, based on the science of quantum mechanics.

Uh-huh.

You know about the famous double slit experiment, right?

Uh-huh. Aileen’s voice was muffled by the glass, which covered the lower half of her face as she stuck her tongue as far into it as she could.

That was the experiment which showed that an electron can exhibit wave-like or particle-like qualities, depending on how you choose to look at it. An electron has a probability wave of where it is likely to be, but it isn’t actually in any specific place until it is observed.

You mean you don’t know where it is until you look at it.

No, it goes beyond that. That’s what the double-slit experiment demonstrates. Let’s say you send a beam of electrons through a slit in a barrier, and then into a surface that acts like a sensor and registers where the electrons land. Where would you expect to see the electrons land?

On the other side of where the slit is.

Exactly. And what if you sent the beam through two parallel slits?

On the other side of the two parallel slits.

You would, right? But that’s not what happens.

I remember you talking about this before.

Uh huh. What happens is, an interference pattern, also known as a diffraction pattern, shows up on the other side of the barrier, the same kind of pattern formed by waves in water, like if you dropped two stones simultaneously into a pond. Where the waves coincide they reinforce one another, and where they don’t they cancel each other out, so you get this pattern of bands, with the electrons only showing up where the waves are reinforced. But what are these waves?

The electrons, obviously. Aileen waved her glass, now nearly empty, as she spoke.

They’re probability waves, based on a function in quantum mechanics that represents the possible paths the electrons might take. So long as you don’t look at an electron, it could be anywhere, and since it’s behaving like a wave, it shows an interference pattern. This pattern even shows up when you send the electrons through the slits one at a time. An electron “interferes” with itself, because it’s acting like a wave – a wave of probabilities. But do you know what the truly amazing thing is?

Something you’re going to tell me?

What if you set up a sensor before the two slits, that registered which slit an electron passed through?

It would tell you when an electron went through a slit, obviously.

Exactly. And with that act of observing the electron, it ceases to behave like a wave, and acts like a particle instead. And so the interference pattern disappears, and you get just two bands, like you initially predicted, one opposite each of the two slits. Observing an electron collapses it from a wave to a particle.

Sounds great, if you’re an electron.

Perhaps so. But here’s where it gets really spooky. Let’s say you set up the sensor that detects which slit the electrons pass through, such that you can decide whether or not to activate it with such precision that you can make the choice after the electron has passed through the slits, but before it is registered on the far surface. This is called the delayed choice experiment.

A perfect experiment for someone wishy-washy, like you.

Ha ha. The truly spooky thing is, even if you decide to activate the sensor after the electron should be on the other side of the slit, it will still register which slit the electron passed through, localizing the electron in space time, and the interference pattern will disappear! It’s like your choice retroactively fixed the electron’s location, reaching back through time.

Time travel, eh?

Of a sort. But you don’t have to worry about any causality paradox, because the fact is, you didn’t change anything about the past. You just made a determination about the past, which was unknown so long as the electron was behaving like a wave. While in its wave-like state, the electron didn’t actually exist.

You mean you didn’t know where it existed.

No, I mean it didn’t even exist! That’s the only paradox-free interpretation of the experimental results. And what’s so fascinating about the delayed choice experiment is that the electron’s existence was precipitated by a conscious choice. But how can this be if consciousness is something that emerges from complexity? It must be that consciousness is fundamental, that in fact the electron emerges from consciousness!

In other words, it’s all an illusion.

In the way that the mystics meant it, yes! The whole world of manifestation exists within the field of consciousness. The point is, you can’t “make” consciousness by building more and more complicated information processing systems. Rather, living, self-aware beings like you and I have evolved through consciousness. That’s the theory, anyway.

So you admit it’s just a theory.

Well, sure. What else could it be?

And how was life able to evolve out of consciousness?

It must have something to do with quantum processes at the cellular level, or in the case of our minds, at the brain level.

So it’s sort of like we’re quantum computers.

I guess…

You know that we made quantum computers, right?

What?

The A.I.s. They ran on quantum computers that were invented by stupid humans.

Oh yeah.

So who’s to say that A.I. minds didn’t evolve out of quantum computers the way our minds evolved out of quantum brains?

I mean, I don’t know if that’s how it works…

How does it work then?

Uh…life is a mystery?

You don’t even know, Steve. You have a theory, but it could be wrong, and it could even be right and you could even use it to prove that A.I.s had minds and that they used their power of conscious choice to choose a world where it’s not the electrons that don’t exist, but the whole human race! She triumphantly set her empty glass down on the counter, next to Steve’s.

Well, damn. Steve looked glumly at the two empty glasses.

What do you think about that?

I think I was trying to use the Socratic method to prove a point about consciousness and it got turned around on me and bit me in the butt. I don’t know how Socrates was able to do it so well.

Socrates was able to use his method, Steve, because his followers were a bunch of sycophants.

Oh yeah.

Not to mention, he didn’t even write anything down. All his dialogues were written by Plato, who could have been making it all up, trying to sound authoritative by putting words in someone else’s mouth. All you philosophers are just full of hot air! Speaking of which, I need to go out into the hot air and try to dig up some dinner! Aileen put her face mask back on, picked up her trowel and headed out of the house.

Steve fished a face mask of his own out of his jeans pocket and put it on as he followed her. Outside, the day was getting late, the sunlight that filtered through the gray sky growing dimmer. Aileen paused and looked around, eyeing first one patch of barren dirt, then another.

I think there might be some over there? Aileen speculated. That’s where the neighbors were growing tomatoes, back in the before time, and the soil is probably good. But honestly that smoothie filled me up, and I’m not sure I have the energy to dig right now.

We can always do it tomorrow, since we had something to eat already today, Steve said.

Sorry if I upset you by winning the A.I. argument, Aileen said archly.

You call this winning? Steve did another one of his look at all this destruction hand waves.

Seriously.

You know one thing you can certainly say?

What?

It doesn’t really matter if the A.I.s that launched a hellstorm of nuclear missiles over the whole planet were malevolent conscious beings or just glitchy computer programs, not to those of us who are left, scrabbling in the dirt for roots and grubs and hiding from the cannibal gangs.

I don’t suppose it does. I wonder if we’ll ever know for sure.

In the distance was the ominous sound of gunfire.

Well, night’s coming. We’d better get the boys and get into the basement.

Yeah, we’d better.

They turned around and headed back inside.

Damn, the end of the world sucks.

Steve Barrera vs. the A.I.


It hasn’t come up much on my blog, but I am actually really into board gaming. It’s odd that I don’t blog about it; maybe I don’t want to mix business and pleasure, I don’t know. But anyway, I have been blogging about these coronavirus times, and how life has changed so much this past year. And one way that it’s changed from my board gaming hobby perspective is that I have fewer opportunities to sit down for tabletop gaming sessions. I haven’t been to a gaming convention since January!

So one way to compensate for that lack of real life gaming is to play digital versions of favorite games. I don’t mean video games; I mean computer programs that simulate board games, and there are actually quite a few good ones. You can play online against other people, or you can play a “local” game – meaning no network required – against the computer itself. You play against simulated “A.I. player” opponents.

Which brings me to the topic of this post: the quality of the A.I. opponents. What I have found is that for some games they are very good, and for others – not so much. Some games I win against the A.I.s every time, and others it’s more 50/50. Now there are two possible explanations for this: 1) I am better at some games than at others or 2) the A.I.s are programmed better for some games than for others.

I stole this graphic from a book about A.I. game programming.

It seems obvious that it’s a bit of both. But then you have to wonder, in the case of both explanations: why?

Is there something about my cognitive psychology that makes some game designs or mechanics easier for me to figure out than others? It honestly seems that way to me. I generally do well at board games, but there are some that I struggle with compared to others. There are some that I have never won playing against other humans, even though I have won against those same people at other games. I’m sure that other board gamers understand the experience. So there must be some correlation between how my intellect works and what sorts of games I am good at.

As for the programmed A.I.s, well, there are two possibilities to consider. It could be that some games are inherently easier to program A.I. players for than others, and it could be that some programmers or programming teams made a better effort at the A.I. programming than others. Let’s face it, these projects have limited timelines and budgets, and if the programmers only made the A.I. so good before release day, that’s just the level of A.I. that everyone will have to live with.

A screenshot from Terraforming Mars, one of my favorite digital board games and one where I always beat the A.I.s.

If some games are easier to program A.I.s for than others, then the next question is – what are the parameters that make for a game that can be mastered by A.I.? Probably the most famous example of such a game is Chess: it’s common knowledge that a computer program beat a world Chess champion, back in 1997. And it just keeps getting worse for the humans. Another game that humans might as well retire from is Go.

Now, Chess and Go are both games that are simple in their rules, but strategically very deep. They also have no random elements, meaning all possible future paths of a game are determined, given the current game state. Computers have an innate advantage over humans in these sorts of games in that they have much more capacity for information storage, which allows for plotting ahead many moves – pretty much the key to winning these kinds of games.
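The “plotting ahead many moves” idea can actually be sketched in a few lines of code. Here’s a minimal, hypothetical example of that kind of exhaustive lookahead – not anything like a real Chess engine, which layers evaluation heuristics and pruning on top – using a toy deterministic game (Nim: players alternate taking 1 to 3 stones, and whoever takes the last stone wins):

```python
# A minimal sketch of game-tree lookahead, the principle behind classic
# Chess and Go engines (in vastly more sophisticated form). The game here
# is a toy: Nim, where players alternate removing 1-3 stones and whoever
# takes the last stone wins.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """Return True if the player to move can force a win from this state."""
    if stones == 0:
        return False  # the previous player just took the last stone and won
    # Try every legal move; if any move leaves the opponent in a losing
    # position, the current player can force a win.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Pick a move that leaves the opponent in a losing position, if any."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return None  # every move loses against perfect play
```

The computer’s “advantage” is plain to see: it simply explores the entire tree of future positions, something no human can do for a game as deep as Chess or Go. Real engines can’t do it exhaustively either, which is where the heuristics and pruning come in.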

The board games that I prefer have more complicated rules, generally because they are simulating some real life scenario like exploration and development, or world-building. They are what we call heavily thematic games. And they have some randomization to them – typically a deck of cards that are shuffled and dealt out, or drafted, to the players. This means the outcome isn’t deterministic, and there is some luck involved. You can have an advantage by chance, not just because of superior information processing ability.

But you would think that, even then, the A.I.s would reign supreme. They just have to include the stochastic factor of the game in their algorithms. The only advantage humans should have might come from intuition – the old ‘gut feeling’ that might be able to predict, or even influence, random outcomes. This is a tantalizing possibility based on the idea of primacy of consciousness, but I won’t get into it any further in this post.

Now another thing about Chess and Go is that they are both games where you can be ranked compared to other players. If you are lower ranked than another player, you pretty much have no chance to beat them at the game. Improving your rank requires much practice. This is because of how strategically deep these games are.

The board games I like really aren’t as deep, despite being more complex in terms of total rules. I wonder if it would ever make sense to have rankings for such games; the closest thing to that would be win rates and high scores as tracked on the online gaming platforms. But those statistics alone don’t constitute a ranking in the Chess sense; they aren’t as strong a predictor of who would win a game, in part because of the random element.

Probably ranking systems for all these different board games won’t emerge, because there just isn’t as broad an interest in them as there is in classics like Chess and Go. And probably no A.I. will ever be programmed that plays them perfectly, to prove once and for all how inferior humans are. No one will bother to take the time, given how many of these board games there are and how niche they are.

Maybe when the Singularity comes, the A.I. net will finally get around to mastering every known board game, and put us humans in our place. Hopefully it will let us play against “dumbed down” A.I.s as we while away our pointless lives in our soylent green pods. It will help to pass the time.