A UK-based tech expert said he isn't losing sleep over the recent growth in artificial intelligence, but cautioned that AI could become a boss from hell, overseeing an employee's every step.
Michael Wooldridge is Professor of Computer Science at the University of Oxford and has been a leading expert on AI for at least 30 years. He spoke to The Guardian this month about upcoming talks he will be giving this winter to demystify artificial intelligence and highlighted his concerns about the technology.
He told the outlet he doesn't share the concerns of some AI experts who warn the powerful systems could one day lead to the demise of mankind. Instead, one of his worries is that AI could turn into a boss from hell, monitoring employees' emails, providing constant feedback and perhaps even deciding which human employees to fire.
“There are some prototypical examples of these tools available today. And I find that very, very worrying,” he told The Guardian.
AI has already made its mark in a handful of industries, from helping medical leaders diagnose cancer to detecting fraud at financial companies and even drafting legal briefs that cite relevant case law.
“I lose sleep over the Ukraine war, I lose sleep over climate change, I lose sleep over the rise of populist politics and so on,” he said. “I don't lose sleep over artificial intelligence.”
Wooldridge told Fox News Digital in an email that “existential concerns about AI are speculative” and that “there are far more immediate and concrete existential concerns right now.”
“First and foremost is escalation in Ukraine, which is a very real possibility and means nuclear war is certainly closer now than it has been at any time in 40 years. So if you want to lose sleep over something, I think that's a much more important issue,” he said.
Wooldridge said the spread of AI and its growth in intelligence pose other risks, such as bias or misinformation.
“It can read your social media feed, sniff out your political leanings, and then feed you disinformation stories to get you to change your vote, for example,” he said.
However, Wooldridge said users should guard against such risks by being skeptical about AI, arguing that companies behind the technology must be transparent to the public.
“I'm not ignoring existential concerns about AI, but to really take them seriously you'd have to see a really plausible scenario for how AI could pose a threat (not just ‘it might be smarter than us'),” he added in his comments to Fox News Digital.
The Oxford professor will lead a prestigious public science lecture series in Britain this December, the Royal Institution Christmas Lectures, which has covered a variety of scientific subjects since its inception in 1825. He will focus on explaining artificial intelligence to the public this year, noting that 2023 was “the first time we had general mass-market AI tools, by which I mean ChatGPT.”
“It's the first time we've had an AI that feels like the AI we've been promised, the AI we've seen in movies, computer games and books,” he said.
ChatGPT, OpenAI's popular chatbot capable of mimicking human conversations, saw its usage explode this year, recording 100 million monthly active users by January, setting a record as the fastest-growing platform.
“In the [Christmas] lectures, when people see how this technology actually works, they're going to be surprised at what's actually going on there,” Wooldridge said. “That'll make them much better equipped to step into a world where that's another tool they use, and so they won't think of it any differently than a calculator or a computer.”
The lectures will include a Turing test, which examines whether AI exhibits human-like intelligence. Humans engage in a written conversation with a chatbot, and if they can't tell whether they're corresponding with a human or a chatbot, it could indicate that the AI exhibits human-like intelligence, The Guardian reported.
However, Wooldridge countered that the test was not optimal for making such a determination.
“Some of my colleagues think we've basically passed the Turing test,” Wooldridge told The Guardian. “Sometime, very quietly, in the last few years, technology has gotten to the point where it can produce text that is indistinguishable from text that a human would produce.”
“I think it shows us that the Turing test, as simple and beautiful and as historically significant as it is, isn't really a great test for artificial intelligence,” he added.
Filming for the Christmas series begins on December 12 before it airs on BBC Four between Christmas and the New Year.
“I want to try to demystify AI so that people using ChatGPT, for example, don't imagine they're speaking to a conscious mind. That's not the case!” Wooldridge told Fox News Digital about the forthcoming presentations. “Once you understand how the technology works, you get a much deeper understanding of what it can do. We should think of these tools – impressive as they are – as nothing more than tools. ChatGPT is immensely more sophisticated than a calculator, but it has a lot more in common with a calculator than it does with the human mind.”