The professor’s great fear about artificial intelligence? That it becomes the boss from hell

It has been touted as an existential risk on par with pandemics. But when it comes to artificial intelligence, at least one pioneer doesn’t lose sleep over these concerns.

Professor Michael Wooldridge, who will be giving the Royal Institution’s Christmas lectures this year, said he was most concerned that AI could become the boss from hell, monitoring every employee email, offering continual feedback and even, potentially, deciding who gets fired.

“There are some prototypical examples of these tools available today, and I find that very, very disturbing,” he said.

Wooldridge, a computer science professor at Oxford University, said he plans to use Britain’s most prestigious public science lectures to demystify artificial intelligence.

“This is the year that, for the first time, we had mass-market, general-purpose AI tools, by which I mean ChatGPT,” Wooldridge said. “It’s very easy to get dazzled.”

“It’s the first time we have an AI that looks like the AI we were promised, the AI we’ve seen in movies, computer games and books,” he said.

But tools like ChatGPT are neither magical nor mystical, he stressed.

“In the [Christmas] lectures, when people see how this technology actually works, they’ll be surprised at what’s actually going on there,” Wooldridge said. “This will equip them much better to enter a world where this is another tool they use, and so they won’t look at it any differently than a pocket calculator or a computer.”

He won’t be alone: robots, deepfakes and other stars of artificial intelligence research will join him to explore the technology.

Among the highlights, the lectures will include a Turing test, the famous challenge first proposed by Alan Turing. Put simply, if a human judge engages in a typed conversation but cannot work out whether the entity responding is human or machine, then the machine is said to have demonstrated human-like understanding.
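As a conceptual aside, the setup is simple enough to sketch in a few lines of code. The snippet below is a minimal illustration of the imitation-game loop, not any real test harness: the names canned_respondent and run_imitation_game are made up for this example, and the canned-reply stand-in takes the place of the hidden party, which in a genuine test would be either a human or an AI system unknown to the judge.

```python
import random

def canned_respondent(prompt: str) -> str:
    """Hypothetical stand-in for the hidden conversation partner.
    In a real Turing test this would be a human or an AI system."""
    replies = [
        "That's an interesting question.",
        "I hadn't thought about it that way.",
        "Could you say more about what you mean?",
    ]
    return random.choice(replies)

def run_imitation_game(judge_questions, respond):
    """The judge asks typed questions; afterwards they must decide
    whether the hidden party is human or machine. If the judge
    cannot tell, the machine is said to have passed."""
    transcript = []
    for question in judge_questions:
        transcript.append((question, respond(question)))
    return transcript

if __name__ == "__main__":
    questions = [
        "What did you have for breakfast?",
        "What does coffee taste like?",
    ]
    for q, a in run_imitation_game(questions, canned_respondent):
        print(f"Judge: {q}\nHidden party: {a}\n")
```

The point of the sketch is only that the test inspects behaviour through a typed channel: the judge never sees the respondent, just its text.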

While some experts remain adamant that the test has yet to be passed, others disagree.

“Some of my colleagues think we, in essence, passed the Turing test,” Wooldridge said. “Somewhere, quietly, in the last couple of years, the technology has gotten to the point where it can produce text that is indistinguishable from text a human would produce.”

Wooldridge, however, takes a different view.

“I think what that tells us is that the Turing test, simple and beautiful and historically important as it is, isn’t really a great test for AI,” he said.

For the professor, an exciting aspect of today’s technology is its potential to experimentally test questions that had previously been relegated to philosophy, including whether machines can become conscious.

“We don’t actually understand how human consciousness works at all,” Wooldridge said. But, he added, many argue that experiences are important.

For example, while humans can experience the aroma and taste of coffee, large language models like ChatGPT cannot.

“They may have read thousands and thousands of descriptions of coffee consumption, the taste of coffee and different brands of coffee, but they have never experienced coffee,” Wooldridge said. “They have never experienced anything.”

Also, if a conversation is interrupted, these systems have no sense of time passing.

But while such factors explain why tools like ChatGPT aren’t considered conscious, machines with such capabilities could still be possible, Wooldridge argues. After all, human beings are just a bunch of atoms.

“For that reason alone, I don’t think there is any concrete scientific argument that suggests machines can’t be conscious,” he said, adding that while machine consciousness would probably be different from human consciousness, it might still require some meaningful interaction with the world.

With AI already transforming fields from healthcare to art, the potential of the technology seems huge.

But according to Wooldridge, that also comes with risks.

“[AI] can read your social media feed, pick up on your political leanings, and then feed you disinformation stories to try to get you to, for example, change your vote,” he said.

Other concerns include systems like ChatGPT giving users bad medical advice. AI systems can also end up regurgitating the biases present in the data they are trained on. Some fear unintended consequences from the use of artificial intelligence, or that it could develop preferences that are not aligned with ours, although Wooldridge argues the latter is unlikely with current technology.

The key to addressing today’s risks, he argues, is to encourage skepticism, including about why tools like ChatGPT make mistakes, and to ensure transparency and accountability.

But he didn’t sign the statement from the Center for AI Safety warning of the dangers of the technology, or a similar open letter from the Future of Life Institute, both released this year.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the former said.

“The reason I didn’t sign them is that I think they conflated some short-term concerns with some very, very speculative long-term concerns,” Wooldridge said. He noted that while some spectacularly stupid things could be done with artificial intelligence, and the risks to humanity should not be underestimated, no one was credibly proposing, for example, to entrust it with responsibility for a nuclear arsenal.

“If we weren’t giving control of something lethal to AI, then it would be much harder to figure out how [it] could really pose an existential risk,” he said.

Indeed, while Wooldridge welcomes the first global AI safety summit this autumn and the creation of a UK taskforce to develop safe and reliable large language models, he is not convinced by the parallels some have drawn between J. Robert Oppenheimer’s concerns about the development of nuclear bombs and those aired by artificial intelligence researchers today.

“I lose sleep over the war in Ukraine, I lose sleep over climate change, I lose sleep over the rise of populist politics and so on,” he said. “I don’t lose sleep over artificial intelligence.”
