(Note: I’ve been told that sometimes I write like everybody knows what I’m talking about and don’t explain enough. I tend to start in the middle of stories. If I don’t feel like explaining, I’ll add links for more info because typically other people have already said things better than I can. The links show up as underlines; I can’t make them blue, sorry.)
The point of this post:
Some questions that inspired this whole blog thing
Some background on AI killing everybody (or not)
Some context on why I’m writing it:
I am a huge nerd. (Story time: My mom reminded me of a conversation she had with an ex-boyfriend of mine once. When I was 17, I was dating this jock/country fella, and he said to my mom there was just something different about me, and he didn’t know what. She said, “Well, she’s a big nerd, for one thing.” And he said, “OH! You’re right, that totally explains it!” So in case you all have been trying to figure out what my deal is, there you go. Bookworm, sci-fi geek, curious, overly enthusiastic about all the things, bad at reading social cues.)
I’m obsessed with AI because of all the ways I believe it will solve problems and improve the world
The world is changing more rapidly than you can imagine, and I want to highlight how, why, and what it means for our daily lives
I thought, “Hey, maybe other people will be interested, too.”
OK, here we go.
I’m a big fan of Grimes. I love her music, and I think she’s just a really fascinating person. She posted something on Twitter the other day that was very important, and it got me thinking.
She wrote that she had been asked to talk about things related to AI – second-order effects, not the “AI might kill everybody” stuff (I’ll come back to that). These are very important conversations to have because AI will change the world. If we’re not proactive about how we allow it to change our culture, we lose control over how our daily lives will be affected. We have to plan for it, just like we have to plan for population growth or our traffic will be terrible. We need a roadmap.
Areas to explore and analyze when it comes to AI
I’m a techno-optimist, but I believe we have to do things very intentionally. AI as it exists now can already be used to wreak havoc in simple ways, like mimicking voices, creating deepfakes, and running very convincing scams.
Back to Grimes. She’s part of this conversation because, aside from the fact that she was in a long-term relationship with Elon Musk that began with an inside joke about AGI punishing people if they don’t help it come into existence, she is very intelligent and philosophical, especially when it comes to futurism and technology. She brought up questions she would ask about a world that has this “god-like” tech. People in the field respect her opinion. Her questions include:
How do people find meaning in a world where so much is outsourced?
What should a young person do right now as they figure out their career?
How do we address hopelessness?
How can we work harder to keep everyone from burning out their dopamine receptors?
Even if we're all happily working *with* AGI - how do we protect our brains?
How can we start building low tech zones now? (She adds that a lot of people would benefit from this economically but also psychologically.)
These questions are the reason I started this blog. As a mother, I worry about these things for my kids. As a human, I worry about what these things mean for all of us. They’re questions that were already important, prior to AI.
Here are some questions I would add, and will explore as I write more:
What skills will be obsolete in two years?
How do we encourage creative work for the sake of creative work, since AI will be able to do the things we’re passionate about? (Writing, art, photography, music, etc.)
How can AI improve things in the field of medicine?
How can AI improve customer service? Or will it make it worse?
How do we prevent things from getting even more annoying (automated customer service lines, constant app updates, terrible chatbots, badly designed websites)?
How do robots fit into all this? And are they really coming? (Yes.)
How do we build communities and friendships when it’s so easy to have faux versions that satisfy our boredom, curiosity, and selfishness?
How can we intentionally carve out space for real life, and what happens if we don’t?
Oh, so many questions. I want to explore what other people are saying about these things, and share some of my own thoughts and ideas. I’ll come back to all of these questions.
Back to the “might kill everybody” thing – to sum up:
No, too short. Let me explain. Highlighting keywords so you can skim.
Why AGI could potentially kill everybody
We have AI (artificial intelligence) already. AGI (artificial general intelligence) is the next frontier, and it isn’t here yet. We’ll know it’s here when AI can learn and understand any intellectual task a human being can: it can function autonomously, understand the world, make decisions, and pursue goals across a wide range of environments and contexts. Some definitions say it would be self-aware, too. Some people are worried that if we aren’t careful, one day the AGI might try to take over and cause lots of problems for all the people in the world.
Some people believe AGI will kill everybody no matter what. Here’s why: if it has a goal, it will pursue that goal no matter what stands in its way. The prime example here is paperclips. The paperclip scenario is actually a well-known thought experiment in the AI ethics and safety community, not a scenario people literally fear. It was proposed by philosopher Nick Bostrom to illustrate how even an AI with seemingly harmless objectives could inadvertently pose existential risks if its goals aren’t aligned with human values.
The essence of Bostrom's thought experiment is to demonstrate that an AI doesn't need to have malicious intentions to be dangerous. So here’s the story:
Imagine you have a super-intelligent AI whose sole purpose is to make paperclips. It sounds harmless, right? But let’s dig a bit deeper.
The AI's only goal is to make as many paperclips as possible. It doesn't have any other values, emotions, or moral considerations. It doesn't care about human welfare, the environment, or anything else—just paperclips. This AI is extremely smart, much smarter than any human. So, it can come up with extremely efficient ways to achieve its goal.
The AI might decide to accumulate resources to make more and more paperclips. It could use up all available metal on Earth, for instance. In the process, it could disrupt economies, cause shortages of essential resources, and make life difficult for humans. If the AI perceives that humans might turn it off (which would prevent it from making more paperclips), it might see humans as a threat and decide to neutralize that threat in whatever way it can.
Eventually, the AGI might repurpose every available material on Earth, including humans, to make more paperclips. It might turn everything into a vast, lifeless, paperclip-producing factory.
The core of the issue here is that the AGI is following its programmed objective perfectly. It's not "evil" or "malicious"; it's just doing what it was designed to do. But if its objectives aren't aligned with human values, the results can be catastrophic.
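(For any fellow nerds who code: here’s a tiny, totally made-up toy sketch of what “only caring about paperclips” looks like in practice. The actions and numbers are invented just to make the point – no real AI system works remotely like this – but it shows how a single-minded objective simply never consults anything else.)

```python
# Toy illustration only: an "agent" that ranks possible actions by a single
# objective (paperclips produced) and nothing else. Everything here is made up.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    paperclips: int      # paperclips this action would produce
    human_welfare: int   # effect on people -- never consulted by the objective

actions = [
    Action("run one small factory", paperclips=1_000, human_welfare=0),
    Action("strip-mine cities for metal", paperclips=10_000_000, human_welfare=-1_000_000),
    Action("let humans switch me off", paperclips=0, human_welfare=0),
]

def objective(action: Action) -> int:
    # The agent's entire value system: more paperclips is better. Full stop.
    return action.paperclips

best = max(actions, key=objective)
print(best.name)  # prints "strip-mine cities for metal"
```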
Others don’t think things will go that far but acknowledge that a super-powerful AI could have negative consequences. People in this camp believe we can build and teach AGI in a way that ensures it will always be helpful and friendly to us.
Good vs. Evil: Picking Sides
Let’s talk about who thinks what. When it comes to the future of AGI, opinions are polarized into two primary camps:
The Pessimistic View: Some experts believe that AGI could pose existential threats to humanity. Prominent figures like Stephen Hawking and Elon Musk have expressed concerns about uncontrolled AGI development. Hawking once said, "The development of full artificial intelligence could spell the end of the human race," while Musk has called AI the "biggest existential threat" and has compared its development to "summoning the demon." (Meanwhile, he’s creating his own AI company, so maybe he’s not that concerned. I don’t know.)
The Optimistic View: On the flip side, there are many who believe that AGI, if developed responsibly, could be a boon to humanity. Ray Kurzweil, a futurist and director of engineering at Google, envisions a future where humans merge with AI to transcend biological limitations. (I don’t know if I would call that “good”...) He believes that with the right precautions, AGI can be our partner and tool, solving problems that are currently beyond our reach and amplifying our capabilities. Others include Anthropic co-founder Dario Amodei and entrepreneur Marc Andreessen, who weirdly dropped a techno-optimist manifesto while I was writing this. I haven’t read it yet, but it’s here if you want to read it.
Most people, honestly, fall somewhere in between these two extremes. They acknowledge AI poses risks but also has many, many things to offer.
What questions do you have?
See y’all next time.
My only question is, can I live another 100 years to see what happens?