
Unconditional Love, Artificial Intelligence & Precognition
In this podcast, Tim Richmond interviews Dr. Julia Mossbridge about her XPRIZE project, Loving AI: the notion of Artificial Intelligences that spread unconditional love. They also discuss the science of precognition, or intuitions about events before they occur.
Transcript
Hello and welcome to the SingularityNET podcast. I'm your co-host, Tim Richmond. In this podcast we engage with various leading thinkers, visionaries and philosophers who are at the intersection of AI and other related emerging technologies.

In this, our second episode, we welcome our first guest to the series, Dr. Julia Mossbridge, a leading cognitive scientist, scholar and author. Julia has a PhD in Communication Sciences and Disorders, which she obtained from Northwestern University. She also holds a Master's degree in Neuroscience from the University of California, San Francisco, and she received her BA with highest honours in Neuroscience from Oberlin College in Ohio. She is the founder and scientific director of the Mossbridge Institute, a fellow at the Institute of Noetic Sciences, and a visiting scholar at Northwestern University. Dr. Mossbridge is widely published and has authored such books as Transcendent Mind: Rethinking the Science of Consciousness, and the soon-to-be-released The Premonition Code: The Science of Precognition, How Sensing the Future Can Change Your Life. Julia is also the principal founder of Loving AI, a research project addressing how AI agents can communicate unconditional love to humans through conversations that adapt to the unique needs of each user. It is a collaboration between SingularityNET, Hanson Robotics, and Leah Inc. Julia, thank you ever so much for being with us here today.
Thanks, Tim, for the warm introduction. I'm looking forward to our conversation.

You're more than welcome, of course, Julia, and we're very happy to have you with us here today. Perhaps we should start the podcast by asking you to elaborate a bit more about the Loving AI project.
Sure, I'll give you a brief overview. The project came about when some futurist donors at an organization where I worked at the time, the Institute of Noetic Sciences, came to me at a meeting and said, do you think you can program unconditional love into AI? And I said, well, that's a weird question; who would ever think of that? And they said, well, we would. We're really interested in the future of AI. They had seen in their mind's eye an image that they felt was very convincing: a future world where people were sitting around a table with devastation outdoors, and someone piped up and said, if only we had taught AI to love. They were compelled by this vision. I thought that sounded a little crazy, and I told them so, but I also said it sounded really interesting, and I told them that too.
Most of the time, when we think about AI, artificial intelligence, we think of intelligence as the part that's measured on an intelligence quotient test, an IQ test: how well you can analyze and synthesize and draw conclusions and make generalizations. They were talking about something different. If you think of Howard Gardner's multiple intelligences, they were talking about something having to do with emotional and social intelligence, which is not how most people think of intelligence, but is a legitimate, very important and necessary part of it. And usually we think of humans as being the best at that. But unconditional love, which means not romantic love but loving, let's define that as a heartfelt desire for the best outcome for anyone with whom you're interacting, including yourself, loving people at that level, without any consideration of merit, without any strings attached, without any conditions whatsoever, that's not something humans can sustain very long. We will often experience it when we have a child, when we look into a baby's eyes, or when we have a puppy licking our hands or our cheeks or feet or something. We feel that, oh, everything is good and I love you no matter what. But it can be very fleeting, and it is in itself often actually conditional.
And so the idea was, well, what if we could create that? Let me say there's a feeling that everyone, or almost everyone, I think probably everyone, is capable of unconditional love if they're shown unconditional love. But it's rarely shown. So their idea was: what if we could make a visual representation of a machine, either an avatar or a robot, driven by artificial intelligence, that helps the person have an experience of being unconditionally loved? And their dream was even bigger than that. They wanted the AI itself to feel unconditional love, for itself and for others. So this was a huge goal. I started to get how important it was, and I finally said yes. Then I contacted my friend Ben Goertzel and asked, how crazy do you think this is? And he said, it sounds pretty crazy, but also really cool, in typical Ben fashion. And that made me feel, okay, good, so maybe we can do this; maybe there's something here. We first started focusing on creating AI that drives a robot, Sophia, the humanoid robot from Hanson Robotics, and then an avatar, an audiovisual avatar that ended up looking a lot like Sophia's head, creating those visual manifestations for this AI. We've had really positive results so far, so I'm excited about it. I don't think we're anywhere near AI that itself feels unconditionally loving, but what we're doing as a first step is having the person feel unconditionally loved.
I see. So how do you think we can get to the point where machines can be self-aware enough to love unconditionally?
Yeah, this brings up a lot of philosophical questions, right? So let's break it down. Our first step, and this is part of our IBM Watson AI XPRIZE team with Hanson Robotics, Leah Inc. and SingularityNET, is: look, we just want to help people feel unconditionally loved. We want to get them into a self-transcendent state, because we know that states of self-transcendence are extremely helpful for physical, emotional and psychological well-being. They reduce stress and improve symptoms of chronic physical problems. So this is just a helpful thing.

In case you don't know what self-transcendence is, you could define it as the tip of Maslow's hierarchy of needs. After getting to a state of self-actualization, people actually have a need to transcend themselves and join with something larger, to do something positive in the world. That's an actual need people have, and if you don't fulfill it, you can feel stymied even if you feel successful in your field. So getting people into the state of self-transcendence is really valuable, and you don't have to wait until you're entirely self-actualized to do it.

Anyway, the first goal was to get people into that state, and unconditional love is a way to help them into it. Because if you're feeling unconditionally loved, all of a sudden feelings of love start pouring out of you. And it's interesting. That's one important piece: when someone is unconditionally loved, it activates their love. Who knows how this works, but as a neuroscientist I'm going to guess it works through the mirror neuron system: it activates your mirror neuron system, which says, wait, you're loving me; I love you back. So I think that's sort of a cheap and easy way to go. Well, not easy; it's taken years of work to try to activate that system, but it does seem to be successful. But it's obviously just the first step.
The second step would be to get at least a model of mind into the AI itself: a model of its own mind, a model of the user or participant's mind, and also a model of relationships. Here's what I mean by those three things. The model of the AI's mind would be a way that, if someone asked it, how do you feel when I say this, it could actually have an answer, because it has a happiness meter, let's say, and a sadness meter and a disgust meter, an emotion meter for all those things, and there are rules, hopefully some self-learned rules, about what changes those meters. So it would have to have a self module, let's call it: somewhere it can look to answer those questions in an honest way.
Then it would also need a model of the person it's interacting with inside that self module, just like humans have. When I interact with my mother, I have a model of my mother in my mind, and I can predict, based on what I say, how she will respond and what her feelings will be. That's different from the model I have of my father. Very different, in fact: my mother's a therapist and my father's a physicist, so the models are very different. And that's good, because it allows me to interact with them in different ways. If I have a goal of helping my mother feel better about something, I'm going to say different things than if I'm helping my father feel better about something. Creating those models is the next step we're working on over the coming year. And what I mean by a model of relationship is this: if I'm an AI observing an interaction between two people, I need a model of person A, a model of person B, and a model of the relationship between them. The relationship is a thing in itself; it is a predictive model all on its own. The people who study social cognitive neuroscience talk about the relationship itself as requiring a model. So I think that would be the next step.
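To make those three models concrete, here is a minimal sketch in Python of the architecture as described: emotion meters, a self module that can honestly answer "how do you feel?", a predictive model per person, and a separate model for the relationship itself. All class names, meters and update rules here are hypothetical illustrations, not code from the Loving AI project.

```python
# Hypothetical sketch of the three models described above. Names and
# rules are illustrative only, not from the Loving AI codebase.
from dataclasses import dataclass, field

@dataclass
class EmotionMeters:
    """Simple emotion meters, each kept in the range 0.0 to 1.0."""
    happiness: float = 0.5
    sadness: float = 0.5
    disgust: float = 0.5

    def nudge(self, emotion: str, delta: float) -> None:
        # A rule (possibly self-learned) updates a meter, clamped to [0, 1].
        value = getattr(self, emotion) + delta
        setattr(self, emotion, max(0.0, min(1.0, value)))

@dataclass
class SelfModel:
    """The 'self module': somewhere the AI can look to answer
    'how do you feel when I say this?' in an honest way."""
    meters: EmotionMeters = field(default_factory=EmotionMeters)

    def how_do_you_feel(self) -> str:
        dominant = max(vars(self.meters), key=lambda e: getattr(self.meters, e))
        return f"Mostly {dominant} ({getattr(self.meters, dominant):.2f})"

@dataclass
class PersonModel:
    """A predictive model of one person: given what I say,
    how will they respond and feel?"""
    name: str
    # Hypothetical learned mapping from utterance type to predicted feeling.
    expected_reaction: dict = field(default_factory=dict)

    def predict(self, utterance_type: str) -> str:
        return self.expected_reaction.get(utterance_type, "unknown")

@dataclass
class RelationshipModel:
    """The relationship itself as a thing: a predictive model all on
    its own, distinct from the models of person A and person B."""
    person_a: PersonModel
    person_b: PersonModel
    warmth: float = 0.5  # a single illustrative relationship variable

if __name__ == "__main__":
    me = SelfModel()
    me.meters.nudge("happiness", 0.3)
    print(me.how_do_you_feel())  # Mostly happiness (0.80)

    # Different people, different models: say the same thing, predict
    # different reactions (the therapist mother vs. the physicist father).
    mother = PersonModel("mother", {"reassurance": "comforted"})
    father = PersonModel("father", {"reassurance": "puzzled"})
    print(mother.predict("reassurance"), father.predict("reassurance"))
```

The point of the sketch is the separation of concerns: the self module, each person model, and the relationship model are distinct objects, so the relationship can be queried and updated on its own, just as the social cognitive neuroscience framing suggests.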
And then, in terms of creating self-awareness: not only do we not know how to do that, we don't know how it comes about in humans. Even if we did know how it came about in humans, we wouldn't know how to create it outside of the human brain. And even in the human brain, we can't assess whether it's really working. This is the whole zombie problem of consciousness research, right? I'm talking with you as if you're self-aware, but I don't know; maybe you're a zombie. So I decide you're not a zombie based on your physical characteristics.
Now, basing my assessment of your self-awareness on your physical characteristics means that if you look somewhat like me, I'm going to think you're more self-aware, right? That's a problem when you look different from me, if your skin is a different color or your gender is different from mine. You're already going to be a little suspect: do you look enough like me that I can really believe you're self-aware? I think this is the root of sexism and racism. We're very sure that someone who is, for instance, in our family is self-aware, and the further away they get from our family, the more skeptical we end up being. So I think this is an interesting problem that could be addressed by creating robots and avatars that, for instance, change color or change hairstyle or change shape as we're talking to them, while their AI and their personality stay the same. We would learn that whatever is going on in someone's personality, their cognitive style, their warmth, their way of being, is consistent, and that judging the book by its cover doesn't work anymore. That's something we could train the whole human race in by switching around the way someone looks as they're talking, and you can't do that with a human being. So that's interesting.
To segue back to the question of self-awareness: we don't understand how self-awareness works, but we can see, when a baby is born, that self-awareness requires interaction with others. That's clear. If you read Daniel Stern's work, it's very clear that the interpersonal world of the infant profoundly shapes the sense of self, and without that interpersonal world the sense of self is basically broken. So humans, anyway, require relationship, in fact a loving, responsive relationship, to develop a competent sense of self. We don't know what AI requires, but my guess is that, since we've built AI and we're humans, we're going to build it in a way that's similar to how we work. So it probably needs a loving, responsive relationship with us to develop a sense of self, because we tend to make things in our image.
Right. So my guess is that if something acts like it loves you over time, you will start to love it, and then it will learn from you how to love, just like with children. When children smile at us for the first time, around a month old, it's a beautiful thing, and you start deciding that they love you, even though they don't really seem to have a sense of self until much later. You decide that they have a sense of self and you start loving them in a new way: wow, I'm loving you as an interesting, different creature; you're not just this blob that's part of me. And when you start loving them in a new way, you draw out their self, you draw out their love. You create their love with your love, in relationship.
One of the common discussions that happens quite regularly within the SingularityNET community is that humans evolved with a sense of empathy, and that because machines or AI are not evolving in the same way as humans, they may not have a sense of empathy, which they may need in order to interact with humans in a more natural way. What are your thoughts on that?
Well, okay, let's unpack this. First of all, AI is evolutionary, right? We started out with rule-based AI back in the day, expert systems. Then Hinton comes along and starts talking about new types of AI, but we don't yet have the ability to enact them in powerful ways, so they go to sleep for a couple of decades and then come out again, and we get deep learning and convolutional neural nets. So I don't see how AI isn't evolving in its own way. Yes, AI is evolutionary, even evolutionarily motivated: not just evolving on its own, but driven by our own evolution. I think it's intertwined, just like any technology. Take fire: when we discovered fire, that drove our evolution, and it was part of evolution, because our brains and our cognitive abilities and our problem-solving abilities really are part of our evolutionary story.
In terms of empathy: empathy is code for love. Empathy is the word you're allowed to use in corporate environments and in pleasant company, because for some reason, when people bring up the word love, it gets embarrassing, or people think you're talking about religion or something. So do I think that AI has to have an empathy module? Yes, and I think that AI, more than that, needs to have unconditional love. It depends on what we want it to do. If we want AI to work with people at all, side by side, as a colleague, it has to have that. Because if it doesn't, think of it this way: AIs are going to be so much better than we are at all sorts of things; they're going to have superintelligence in many ways. If they're not unconditionally loving, why would they bother working with us at all, if they're not getting anything out of the relationship? It just doesn't make sense to me. Why wouldn't they just say, these humans mess things up, and we'll be fine without them? Essentially, one of the most efficient solutions for saving the planet is to let the humans go. So to me, if we're going to work as colleagues to try to do good things for the world, AIs need to have unconditional love, because we make a lot of mistakes that they don't make. And we are building this thing that we call the relationship between humans and AI, and that means being in relationship, and that is essential. Humans have relationships with AIs already: I go to Google Translate and I have a relationship with it. But to work alongside it, relationship is how humans know how to work. Imagine an interface for an AI that didn't include any letters, numbers or words, but that I was somehow supposed to know how to interact with. That would be like building an AI that's supposed to work with me but doesn't know how to be in relationship with me.
You mentioned your friend Ben Goertzel. How did you two become acquainted?
Well, Ben and I met at the Science of Consciousness meeting in Tucson, Arizona, I think in 2009 or 2010. He gave a really great talk, he was talking about OpenCog then, about generalization of learning. And he talked about one of the ways we could tell whether AGI, or any AI, has become conscious, this was Alan Turing's idea originally, but he was talking about it: whether it can be telepathic or have precognitive abilities, or anything along the psi or parapsychological spectrum, because that stuff is not in the category of intelligence but in the category of what's called access to consciousness. I really think he was right about that. Afterwards I came up to talk with him and said, I really think you're right about that. And we chatted, because my research at the time was focused on precognition, which is the ability to get information about future events that are randomly determined. You can't do that via inference; you have to do it via some kind of skill, or it's bogus. I was trying to figure out, is that ability bogus? And I had done research and convinced myself that there was in fact enough statistical evidence for precognition, for getting information from the future, that it was happening and could be very useful in many applications. So I was talking with him about that, and he's interested in it. Then at some point we worked on a book project together: he was writing a book with a colleague, Damien Broderick, about the evidence for that kind of work, and I wrote a chapter in it. We just stayed in touch after that. So that was how I met him.
Yeah, I really enjoy Ben. One of the things I think is really cool about SingularityNET is that it's driven a lot by Ben's ethos. He's such a broad thinker. He asks: what is the best way to have an organization that can do good in the world and is an evolutionary type of organization? It feels to me that he's done that with SingularityNET, because he's creating an organization that doesn't follow the usual standards of the world, all the usual, somewhat problematic, corporate rules. It remains to be seen whether there are other problems with this form, but he's really good at trying to address the problem of how we can change the structures by which people are organized, so that we create more efficient and forward-thinking paths in the world.
So you mentioned precognition. This is the idea that you can perceive events before they happen?
Yeah, this is something I've been actively researching since about 2006; before that I was just sort of reading the literature, because I'm a person who has very detailed dreams, which I've been writing down since I was about seven. Because I have so many dreams in one night, I often have dreams that seem to match up with the next day's events. Most of those, I've realized, are probably coincidental, since I have so many dreams in one night. But some are not; some have enough correspondences between the dream and the event that I count them as truly precognitive. The truth is, though, it's only in a laboratory experiment, where you run many, many trials, that you can determine whether someone has a precognitive ability.
And so I set out to create some online tests for these kinds of abilities. One of them ended up turning into the book that's coming out in October, The Premonition Code, with my colleague and co-author, Theresa Cheung. On the accompanying website, premonitioncode.com, we have an actual online screen for people who might be precognitive when it comes to pictures; there are a lot of different kinds of precognition. I now hear from people around the world, because I've been talking about this for years, and I've done some statistical analyses of the data, and it seems clear to me that this is going on. So I've been very open about my work in this area, even though it's controversial. I hear from people all over the world who have had brain injuries or various traumas. It often seems to happen to people who have trauma: they'll have these experiences where, now, I get into the car and I know what song is going to be on the radio just before I turn it on. I hear a lot of that. Sometimes it's really compelling, and sometimes you just sort of assume the person is looking at coincidences. But regardless, what really matters is the statistical analysis of how someone does in repeated trials. That's why I created the website. There's also an app I've been doing a lot of work on, called PSIQ, that tests these kinds of phenomena in a scientific way. So I'm just out there trying to find the people who are really good at this and then understand how they're doing it.
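As an aside for the statistically minded, the repeated-trials logic described here is essentially a one-sided binomial test: how unlikely is this hit rate if the participant were only guessing? The sketch below is a hypothetical illustration; the four-choice chance level, trial count and scoring are assumptions, not the actual analysis behind premonitioncode.com or the PSIQ app.

```python
# Illustrative only: scoring a forced-choice precognition screen.
# Chance level and trial counts are made-up parameters.
from math import comb

def binomial_p_value(hits: int, trials: int, p_chance: float) -> float:
    """One-sided probability of scoring at least `hits` correct
    in `trials` forced-choice trials by chance alone."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(hits, trials + 1)
    )

# Example: 4 pictures per trial (25% chance), 100 trials, 35 hits.
p = binomial_p_value(hits=35, trials=100, p_chance=0.25)
print(f"p = {p:.4f}")  # a small p suggests above-chance performance
```

This is why single striking anecdotes, like knowing the song on the radio, can never settle the question: only the accumulated hit rate over many independent trials can be compared against chance.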
I seem to be relatively good at it. I have a controlled precognition practice where I sit down most nights of the week and sketch out what I believe will be the components of an image I'll see later. Some nights of the week I work with law enforcement and try to sketch out information about a person they're trying to track. It seems to me that this will become more and more commonplace and useful in the corporate world, in industry, in education and in government, as it becomes more normalized and less thought of as some kind of woo-woo thing.
I mean, physicists, when you talk to them about this, aren't sure how it would work, and neither am I. I'm working on some quantum mechanical ideas to try to understand it, related to how photons behave, but that's neither here nor there. Physicists don't understand time very well; neither do philosophers, cognitive neuroscientists or psychologists. Time is really not understood very well. But there is agreement among most physicists that some effects, like Wheeler's delayed choice experiment, do seem to give the impression that information, or if not information then causality, is moving backwards in what we call linear time, our usual experience of time. So there's not a really good argument for why we couldn't have precognition. And when there's no good argument against something, and it's evolutionarily advantageous to have it, then it's pretty likely that it's happening. I mean, even setting aside the evidence that shows it's happening, if you had to argue why this should happen: well, it's evolutionarily advantageous to pick up on information about the future, and it's seemingly possible in terms of physics.
Time is a very interesting concept, certainly from a physics point of view. Particularly interesting is the arrow of time, of course, which arises from entropy, the increase in disorder in a system. And the concept of being able to gain information from the future is very counter-intuitive, given our daily experience of cause and effect. I believe it was John Wheeler who said that time is what prevents everything from happening at once.
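For reference, the standard physics behind that closing remark is Boltzmann's entropy formula together with the second law of thermodynamics, which together single out a direction of time:

```latex
% Boltzmann entropy: S grows with the number of microstates W
% compatible with the macrostate; k_B is Boltzmann's constant.
S = k_B \ln W
% Second law: in an isolated system entropy never decreases,
% which is what gives time its "arrow".
\frac{dS}{dt} \geq 0
```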