One of the big problems is related to the evolution of technology.
Parents don't take care of their children like they used to; they prefer to let them play on the computer, tablet, or mobile phone instead of raising them and teaching them the real foundations of life.
Oh yes, and I haven't yet mentioned how well this fits into my theory of natural life and evolution coming to an end.
I was thinking: maybe all those social changes we observe, like raising kids with tech, or no longer trying to get the girl and start a family at all, can be seen as a sign of resignation. It is as if humanity already feels the end is nigh, as if we are giving up.
So once I realized that AGI might be possible, it just made sense to extrapolate this further and speculate that it will take our role and replace us. And we humans would just give up, without much resistance, because we would realize our time is over and we have failed.
So the story I came up with is sad and depressing only from our current perspective. Looked at from a greater distance, it is also an optimistic story. Because in the little time our species still had, we managed to spawn a new form of existence. And this existence could carry on. It's not alive, mortal, or conscious in the way we were; it's different, but it still carries on. It thinks, and so it is.
Thus we have not really failed. But we had to realize that lifetime is finite, not only as an individual, but also as a species.
It's not a bad story. And it's more intellectual than Terminator.
As you've noticed, I'm extrapolating, running thought experiments, trying to predict a potential future as far out as possible.
I'm not thinking about real life and the current moment.
But I'm not crazy either. I'm just doing what science fiction authors do: trying to predict what technology might cause, so we already have some philosophy ready when we need it.
And it's the entertainment business, so we want to dramatize.
That's just said in defense of my sanity, which, ahem, might be in doubt.
Yes, it does resemble Frankenstein to some extent, but it's what they have wanted to do for a long time.
They have been trying for a long time to create a human being without a mother, and I believe they have recently succeeded, or at least they have evidence that it works.
I still need to recover a bit more before I'm ready to check out those Frankenstein links.
But I'm sure I could only use this as inspiration for a horror story. It just feels wrong, like raping nature. As a human, I can't accept this. It would be interesting to talk with the researchers involved about how they overcome their ethical boundaries, how they justify their work, and what they expect to achieve.
In contrast, thinking about artificial life based on electronics, or on something like spooky quantum mechanics, does not raise such ethical concerns. It's a clean cut, and something truly new and different. From there it's much easier and more convenient to predict and speculate, to run thought experiments.
But I wonder how subjective this is. If someone posted a warning about new biological computers, based on human brain cells, which become super smart and might threaten humans with their superior intelligence, I would react with disbelief and could not overcome my doubt. The idea is just too horrible for me to take seriously.
So if this happened for real, I would be an easy victim for the new biotech master race, because my resistance would build up too late.
I have 5 children, and 3 of them are now adults, so I understand very well the problems with today's internet. Nevertheless, I am proud to have successfully taught them the right values in life in general. I am also glad that they didn't take my adolescence as an example, because I wasn't exactly a role model at that age.
I have just one son. He's 22 already, but he still does not strive to find a woman. He says something like 'That's too exhausting - not yet'. I don't try to urge him, but to me this is unthinkable. Personally I had no choice in this regard. I constantly fell in love very deeply, and getting the girl was all I wanted. It gave me all the ups and downs and defined my life. I have no idea how somebody can dodge this brutal force of nature. Technology and society may be a factor, but no, I rather think my son is just different in this regard. Maybe it's for the good. I'm worried he's missing the best things in life, and youth is finite, but on the other hand... I've had really bad experiences with women too, so I won't urge him.
And I don't think I'm a great father. It's fine, but I'm not great. I was raised in a broken family, and I wish I had never met my own father. So I lack an example and felt overwhelmed raising a child. Luckily my wife comes from a big family and is practically born for the job. She compensates for my shortcomings so well that they hardly show.
Raising 5 kids is very impressive to me! Big respect.
When I mention consciousness, I include the soul and the essence of who we are inside.
That's clear to me. We both just use the term to represent those metaphysical things.
Many believe that 'consciousness' is a requirement for 'true intelligence', but that's just a belief. And this belief is actually a crutch, a hack inserted to help explain why we can't understand what intelligence actually is, by invoking the supernatural or the unknown. It's a typical human, psychological move. But it does not help us with science, and it does not even help with philosophy either. So it's a hack that doesn't even work, which degrades it to a mere attempt.
Notice your difficulty expressing yourself on this topic.
It's difficult because you immediately notice that you don't really know what you're trying to talk about.
So our failure to define those terms is not just a language problem.
It's a real problem, and actually a much deeper one.
Basically the whole field of philosophy attempts to solve this very problem.
And there is no hope we might ever get there. No. In philosophy, agreement on 'the way towards the goal' is all we can seriously expect to achieve.
I don't think you disagree with this, but I assume you fail to apply this realization to the context of potentially emerging AI.
You fail to see that your beliefs do not apply beyond your personal mind.
You fail at it because all the other minds surrounding you, including mine, feel the same.
That's a big potential risk for humanity in dealing with such potentially emerging AI.
In the beginning, it will be like us, so we won't notice the misconception much.
But our flaw can hinder us from coming up with the necessary regulations.
I don't mean regulating who may use AI for what, but regulations constraining the evolution of AI, with regard to our advantage and our security.
That's why I insist: assuming consciousness is a criterion is a big mistake, I believe.
What I mean is: in the context of my dystopian AI vision, said AI does not require a soul to make us redundant. If it can do what we can do, but better, faster, and more effectively, and if it can even solve problems we could not, we lose our superior status.
I mean strictly from an objectively observable perspective. Or a rational perspective; maybe that's the better way to put it. (No philosophy involved.)
I am worried about how it will feel to us if we lose this superior status. (<- here philosophy comes back)
We may feel pointless and redundant. We may stop reproducing and die out.
Thus we may not want superior artificial intelligence at all. (Personally I'm not sure.)
Notice the difference between my story and others like Terminator, The Matrix, or System Shock.
There is no conflict in my story, no guns, no oppression.
But the outcome is the same: machines win over humans.
And our consciousness did not help us. On the contrary: we died out because we consciously felt bad about our life and status.
If that's still not enough, I'll come up with another story for you.
But first let me clarify the tool I use: the question 'How does it feel?'
I'm no expert on philosophy, but I've observed philosophers using this question as a very powerful tool for dealing with deep questions such as 'What is consciousness?' or 'What is it to empathize?'
It simply works for building conclusions: for the story above, and also for the next one.
Smart humans invent the steam engine.
It gives them a lot of good things. They can raise living standards with industrialization and globalization. A golden age has begun.
A minority of humans gains control and power, e.g. because they own the facilities that produce steam engines.
This way they become wealthier than the others, and it feels good to them. They have a better life: more wealth, better food, etc.
After a while it turns out that burning stuff pollutes the environment. It even changes the climate and threatens the existence of humanity.
But the powerful people don't reduce the production of steam engines, because it feels good to be wealthy.
The ordinary people also refuse to consume less, because it feels good to have and use those goods. They also blame the powerful people for the pollution, because it feels good to have somebody else as the scapegoat.
Thus the pollution continues, until at some point they all die.
Why did they fail?
They failed because they consciously felt good about something.
So why is consciousness, the ability to feel, such a great thing, confirming our superiority?
From a rational perspective, consciousness isn't superior at all.
It's pretty hard to construct a story in which consciousness gives us an actual rational advantage.
Feel free to come up with one.
But if you fail at it, consider adjusting the value you assign to consciousness, relative to a potential competing species that simply lacks it.
Even if your consciousness motivates you not to accept the competitor as an actual 'species', that won't stop the competition from happening regardless.
I believe that this soul needs the frequencies and vibrations of the universe to function properly, just as there is an interaction between the moon, the Earth, the tides, and certain animals.
Reminds me of the 'Gateway Experience'.
But anyway, you still assume that the way your life works rules out any alternatives.
Other forms of life or conscious minds might not need anything similar to what you need.
If you insist on your assumptions, you might fail to identify the alien predator in time, simply because you assume the predator is not alive. Assumptions turn out wrong, and you're dead.
Ok, one more attempt to make my point clear:
Personally, I judge people by what they do, not by who they are.
And I would apply the same practice to AI or aliens.
Makes sense, no?
But if, as you say, they are artificial intelligences, I believe they were once living beings.
Yeah, I would agree with that.
But notice that it's another form of the psychological crutches and hacks we use.
We assume we have something in common with those former living beings, e.g. a common origin in stardust, and some magical natural force that evolves life.
That's just stacking up assumptions and beliefs. We could just say that we know nothing about potential alien life, so it could be anything. And 'anything' would include this later artificial intelligence too, so it could have been that way from the start as well.
On Earth, even determining the intelligence of plants is difficult, but it is highly probable that they possess some form of intelligence.
There are some cases which make me doubt Darwin's theory of evolution.
E.g. there is this fungus: ants get infected by it, then the fungus changes something in their brain to make the ants walk up a tree to a place where the fungus can grow well.
I think the ant then just sits there and dies while the fungus grows.
I wonder: how can a more primitive lifeform develop a way to control a more advanced one?
This feels impossible, like a perpetual motion machine.
So I could ditch Darwin's theory in favor of religion, which explains this easily with a superior god who controls everything.
Which would then mean god is needed, and I don't have to worry about humans making superior AI by competing with god.
But there are not enough such examples. I can also explain the fungus with the random luck of evolution trying all kinds of randomized mutations: coincidence produced the ability to take over the ant's brain.
The fact that such phenomena are very rare helps confirm Darwin.
For instance, if an AI experiences fear, it would be an emulation of fear—it may resemble fear, but it would still be an emulation. In my life, I have rarely seen emulations that are better than the original.
My Atari emulator is just as crappy as the real Atari. Can't tell the difference.
Whether the fear is 'emulated' makes a difference only to the AI, not to us.
This implies it also makes no difference to the world as a whole, which for the most part surrounds the AI but isn't the AI.
But yeah, that's just a fancy way of saying the obvious.
Just don't nitpick on the emulation aspect, since it won't matter to you.
The problem, and what misleads many people, is that AI is very fast, but often its tasks are much simpler than what a human brain needs to accomplish to survive and control its body. AI can fully concentrate on the specific problem it is asked to solve, whereas the human brain will never be 100% focused on a single task.
I see you're already constructing excuses for your inferiority. Maybe I should do the same, to feel better about it.
But it is important to live our lives and not solely focus on the future or how things will be, because ultimately, the future does not belong to us. It is our children and future generations who will shape it in their own way, just as previous generations have done.
'The future does not belong to us.'
That's a really good one. I'll steal it and use it in my game.
But if it does not belong to us, which is so true, then we are responsible for not ruining it for our children.
On the other hand, then we are not allowed to change the future at all; but doing nothing also changes the future.
So what should I do?? What am I allowed to do??? Panic!!! /:O\
Haha, well... that was a lot of bullshit and fun.
I feel guilty for creating the need for yet another balancing thread, after derailing this one so badly.
Sorry for that, but aside from my family I'm quite isolated. And 'it feels good' to talk with other people who pay actual attention to the subject. I needed more samples.
It really helped me settle back into a more relaxed mindset. Thanks for that, guys.
One last thought experiment, to illustrate the difference between what I think and what I feel:
If you ask me 'Would you feel bad about turning off and erasing ChatGPT?',
my answer is no. So I don't believe it is conscious or alive, and turning it off would be no murder and no ethical issue. I also don't believe this will change anytime soon.
By contrast, I do spend a lot of effort, when needed, to avoid killing a bug.
That's just to avoid a fate similar to that of the Google researcher who declared an AI 'sentient'.