It might become a bit awkward for him to be at the helm of a company relying on AI to support its business while simultaneously being involved with an organization trying to scale back AI.
I have a lot of respect for Elon, but the one exception to that is his OpenAI involvement.
After this event, you can't expect Elon to stay on the board of OpenAI without fueling further speculation about conflicts of interest.
Also I'm sure the OpenAI folks weren't exactly pleased about this either.
To be clear, OpenAI is doing a lot of great research work, some of it around AI safety. In my view, and I think most domain experts would agree, the vast majority of the concern should be around things like bugs in AI code, biases that lead to effects like fake news, and AI in the hands of bad (human) actors, not the "machines taking over" scenario.
The main reason he left, I believe, is a conflict of interest that had already occurred when he poached Andrej Karpathy, one of the top scientists at OpenAI, to become head of the self-driving unit at Tesla.
But I think Elon has been disingenuous here first, in spreading the “machines taking over” view to get attention for OpenAI, and second, in personally using OpenAI more as a recruiting vehicle for his other companies.
He left, or at least the stated reason he left was, to avoid a conflict of interest in relation to his roles at Tesla.
This is presumably because Tesla has already been experimenting with AI and autonomous vehicles and is expected to continue further down that path.
I’d like to believe that he realized that this was not an effective approach for him to take and resigned for that reason to focus on the many other great things he is working on.
He is way too smart to genuinely rely on science-fiction pseudoscience in making decisions rather than on the hundreds of domain experts he actually employs.
How do we prevent an AI apocalypse? The Dutch philosopher Spinoza once said that peace is not an absence of war; it is a virtue, a state of mind, a disposition for benevolence, confidence, and justice.
All the great systems we have in our lives exist only because people who had a vision of a more beautiful world willed them into reality through hard work.
We've created layers and layers of these systems: systems of law, of transportation, of social interaction, of healthcare. And they've been entirely reliant on human intelligence for thousands of years.
In the past few decades, though, we've begun the process of creating machine intelligence to improve them. The first phase involved rule-based intelligence: a series of if-then statements, whether using a traffic light instead of a human traffic controller or an automatic fare calculator instead of trusting a driver to charge a fair price.
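This first phase really is just fixed conditionals. As a minimal sketch, here is a hypothetical fare calculator written as plain if-then rules; the rates and surcharge are illustrative assumptions, not anything from the text, and no learning is involved.

```python
def calculate_fare(distance_km: float, night: bool) -> float:
    """Return a taxi fare from fixed if-then rules (illustrative rates)."""
    base = 3.00                                   # flag-fall charge
    if distance_km <= 2.0:
        fare = base                               # short trips: base fare only
    else:
        fare = base + (distance_km - 2.0) * 1.50  # per-km rate after 2 km
    if night:
        fare *= 1.25                              # night surcharge rule
    return round(fare, 2)

print(calculate_fare(10.0, night=False))  # 15.0
print(calculate_fare(10.0, night=True))   # 18.75
```

Every behavior here was written down in advance by a human; the machine only applies the rules, which is exactly why this phase was easy to trust once people understood it.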
We've collectively put our trust into machine intelligence where before we had to trust humans. It was scary at first; initially, we didn't understand how the technology worked. But eventually it became the norm, and we found that the system improved when we trusted the machine to do the job.
We then moved on to using heuristics in the next phase. Our machines started making educated guesses, whether that meant finding the shortest path between two points or predicting the most likely weather for tomorrow.
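The shortest-path example of this "educated guess" phase can be sketched with A* search, which uses a heuristic (here, Manhattan distance) to guide the search toward the goal. The grid, start, and goal below are illustrative assumptions.

```python
import heapq

def a_star(grid, start, goal):
    """Return the length of the shortest path on a 0/1 grid (1 = wall)."""
    def h(p):  # heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # (estimated total, cost so far, cell)
    best = {start: 0}                   # cheapest known cost to each cell
    while frontier:
        _, cost, cell = heapq.heappop(frontier)
        if cell == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                if cost + 1 < best.get((r, c), float("inf")):
                    best[(r, c)] = cost + 1
                    heapq.heappush(frontier, (cost + 1 + h((r, c)), cost + 1, (r, c)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # 6
```

The heuristic is the "educated guess": it never changes which answer is correct (Manhattan distance never overestimates on a grid), it only lets the machine find it without exhaustively exploring everything.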
We again learned to trust them to do those jobs well. Now we're in the last phase: the learning phase. We've begun teaching our machines how to learn for themselves. Eventually, our cars will learn to drive themselves. Chemists will be able to teach their machines to discover new drugs for themselves. Every single system we've built up until now will eventually be able to learn for itself, just like we do. But should we trust our learning systems?
What happens when we solve intelligence entirely, and what does a benevolent superintelligence look like? The famous Dutch painter Rembrandt used to roam the streets of Amsterdam, finding inspiration in these canal walkways for some of his greatest pieces. In his studio, he would ask his students to try to emulate his work, work that took him decades of training to create. We can now condense all that training time into machine intelligence, so that a novice can emulate his style instantly, and that goes for any skill.
We'll all be able to master any skill instantly by augmenting our own intelligence with machines. Our world in its current state, though, is filled with suffering. Life is hard even for the most well-off. The ultra-rich purchase material possessions to compensate for the loss of what really matters: the loss of connection, the loss of intimacy, the loss of community. AI can change that. It can end suffering as we know it entirely, enabling utopia on Earth.
Ending all war and disease using insights that only a superintelligent machine could give us in a matter of seconds. Van Gogh once said, "For my part, I know nothing with any certainty, but the sight of the stars makes me dream."
He actualized his dreams using a canvas, and soon we'll be able to actualize our own using AI as our canvas, whether that means generating entirely new realities to live out our wildest fantasies in, or creating a universal problem solver that, given any objective function, can optimize it by minimizing an error value.
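"Given any objective function, minimize an error value" is, at its core, what optimizers like gradient descent do. A minimal sketch, where the quadratic objective, starting point, and learning rate are all illustrative assumptions:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize an objective by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move downhill on the error surface
    return x

# Objective: error(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The minimizer should converge to x = 3, where the error is zero.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

The same loop, scaled up to billions of parameters, is what trains today's learning systems; the hard question, as the text goes on to ask, is who chooses the objective.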
But who gets to decide what that objective function is? For all the good that AI could bring us, there's also the possibility of it being used for evil.
Holland is world-renowned for its tolerant attitude and open-minded spirit, but just a few decades ago its own police and civil service forces were used to transport tens of thousands of Jews to extermination camps in Germany, including Anne Frank and her family. Adolf Hitler was able to manipulate hundreds of thousands of people into believing his own twisted ideology. If Hitler had been able to get his hands on superintelligence, we'd all be living in a nightmarish reality.
As generative models improve, the ability to manipulate people becomes easier. Moneyed interests have already begun conducting surveillance of our actions and analyzing our data to build weaponized AI.
Propaganda machines. These machines are used to manipulate our opinions and behavior in order to advance political agendas in a way that has never before been possible. An AI could even come up with an objective of its own, and that objective could be to annihilate our species entirely. That could happen directly, because it desires to end human life, or indirectly, in the way that we build canals to make transport more efficient for us and in the process destroy countless ecosystems of insects. So how do we ensure that the AI we create is beneficial and safe? A superintelligence is, by its very definition, smarter than us, able to reason at a level so different from ours that it's literally unimaginable. We could try to contain it, to block it from internet access.
But eventually it will learn to escape, and if one person doesn't let it out, another one will. We shouldn't try to stop progress in AI. Instead, we need to ensure that it's aligned with our own values, and we need to understand exactly how these algorithms work so they produce the results that we want. But who's to say what good values are? Should we be allowed to pay for sex, or consume any drugs we want? We can't even collectively decide on what good and evil are. Philosophers have been debating what constitutes them for thousands of years, and AI lives on both sides of that debate. Viruses will use AI to learn how best to break into security systems, but at the same time, security systems will use AI to learn how best to defend against threats. AI will increasingly be used to create fake but realistic-looking news; at the same time, AI will also get better at detecting what's real and what's fake.
We're in the midst of a new arms race, and AI is the nuclear power of our time. As soon as any institution, be it government or corporation, learns the power of AI, it will race to use it to fulfill its own interests. We can learn a lot from our past. The European Enlightenment-era writer Lord Acton had words that still ring true today. He said that freedom consists in the distribution of power, and despotism in its concentration. We must democratize AI power. We must make it as accessible to as many people as we can, so that the will of the collective overrides that of any one actor. And only by educating ourselves on how exactly this intelligence works will we be able to better align it with our intended goals. It's up to us to ensure that our AI values all the things that we do: that it cherishes laughter and connection, that it preserves friendship and love. In our quest to understand intelligence, we'll learn more about who we are and what it means to be human, and together we'll finally be able to take that most sublime process, guided by nature, into our own hands: evolution.
If you have any doubts, let me know.