
Google’s AI plan to understand everything about humans can go wrong: expert

  • Published: February 14, 2019 - 17:38
  • Updated: February 15, 2019 - 11:02

The rapid advancement of artificial intelligence technology has triggered concerns that the world could be colonized by AI-powered robots, as depicted in the “Terminator” movies.

Such doomsday scenarios are mostly forecast, and often hyped up, by the media, but even some renowned figures in the tech sector, like Tesla’s Elon Musk, have warned of the potential risks of superintelligent robots.


Carlos Art Nevarez, CTO of artificial emotional intelligence firm BPU Holdings.

Carlos Art Nevarez, chief technology officer at artificial “emotional” intelligence firm BPU Holdings, said the end of humanity at the hands of a super AI is impossible by any stretch of the imagination, but that attempts to build a central brain that understands every aspect of humans could lead to “wrong” consequences.

“Even I, as a human with a lot of relationships, have a hard time understanding one or two humans, let alone the entire human race,” Nevarez told The Investor in an interview in Seoul on Feb. 13.

He said companies like Google are creating a central model that tries to understand everything, and what is worrisome is that “they believe that it is actually happening.”

Nevarez has worked in the computer science field for more than three decades. Before joining BPU several years ago, he ran his own software firm and served as CTO at several software companies, including Novell, where he worked with Eric Schmidt, former executive chairman of Google’s parent firm Alphabet, in the late 1990s. Back then, Nevarez and Schmidt talked at length about cognitive computing and pictured the future of computers.

“We dreamed that a computer needs to change and it cannot just be about logic,” said the BPU CTO, adding that he and Schmidt believed “emotion” is the next element of a computer.

Emotion is hard not to recognize, he explained: people can recognize human emotions like anger, sadness or happiness in almost any language, even without speaking.

“If we can get protocols to communicate the way emotions can communicate, there are going to be way more efficient ways to solve problems facing us today,” said the computer science expert, hinting that adding emotion to AI could help prevent the potential risk of it going out of control.

Nevarez explained that what he and his company BPU call artificial “emotional” intelligence, or AEI, is about self-awareness and sympathy. AEI solutions can hold natural conversations and truly interact with humans, unlike existing AI solutions such as Apple’s Siri and Amazon’s Alexa, which give users data on certain subjects upon request by using analysis and pattern-finding techniques. A minimal sketch of that contrast follows.
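To make the contrast concrete, here is a minimal Python sketch of the difference between a request-driven assistant and one that first estimates the user’s emotional state before replying. The lexicon, emotion categories and canned replies are hypothetical illustrations for this article, not BPU’s actual model.

    # Hypothetical sketch: condition a reply on a detected emotion rather
    # than only on the request. Lexicon and replies are illustrative.
    EMOTION_LEXICON = {
        "angry": "anger", "furious": "anger",
        "sad": "sadness", "miserable": "sadness",
        "happy": "happiness", "delighted": "happiness",
    }

    def classify_emotion(utterance: str) -> str:
        """Tally lexicon hits and return the dominant emotion, or 'neutral'."""
        counts = {}
        for word in utterance.lower().split():
            emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
            if emotion:
                counts[emotion] = counts.get(emotion, 0) + 1
        return max(counts, key=counts.get) if counts else "neutral"

    def respond(utterance: str) -> str:
        """An emotionally aware reply adapts its tone; a plain assistant
        would skip straight to the data lookup."""
        emotion = classify_emotion(utterance)
        if emotion == "anger":
            return "I can hear this is frustrating. Let's sort it out together."
        if emotion == "sadness":
            return "I'm sorry you're feeling down. Do you want to talk about it?"
        return "Sure, here is the information you asked for."

    print(respond("I'm furious, the app crashed again!"))  # anger branch

The point of the sketch is the extra classification step taken before any answer is given, which is the interaction layer Nevarez describes, reduced to its simplest possible form.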

AEI, for example, lets users know their health status in real time and can help them find ways to reduce risks instead of just giving numbers on current health conditions.

BPU’s AEI-based virtual nurse assistant can chitchat with patients, helping doctors gauge patients’ conditions from their moods even before an in-person examination. It is now being tested at medical centers in the US.

The Seoul-headquartered firm has also used its AEI-based polling solution to predict past presidential elections in Korea and the US. It gauges public sentiment by analyzing millions of social media posts, including those of campaigners, influencers and candidates. The solution predicted the results of Korea’s 19th presidential election in 2017 more accurately than traditional polling methods deployed by firms like Gallup and CBS Realmeter.
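BPU has not published the details of that polling pipeline, but its general shape, scoring each post’s sentiment and aggregating per candidate, can be sketched in a few lines of Python. The word lists, scoring rule and normalization below are assumptions made purely for illustration.

    # Illustrative sketch of sentiment-based election polling, assuming a
    # generic per-post scorer. BPU's actual pipeline is not public.
    from collections import defaultdict

    POSITIVE = {"support", "love", "trust", "win"}
    NEGATIVE = {"oppose", "hate", "distrust", "lose"}

    def score_sentiment(text: str) -> int:
        """Crude lexicon score: +1 per positive word, -1 per negative word."""
        words = {w.strip(".,!?") for w in text.lower().split()}
        return len(words & POSITIVE) - len(words & NEGATIVE)

    def poll(posts: list) -> dict:
        """Aggregate net-positive sentiment per candidate, normalized to shares."""
        totals = defaultdict(float)
        for candidate, text in posts:
            totals[candidate] += max(score_sentiment(text), 0)
        grand_total = sum(totals.values()) or 1.0
        return {c: v / grand_total for c, v in totals.items()}

    posts = [
        ("Candidate A", "I trust Candidate A and support her plan"),
        ("Candidate B", "I distrust Candidate B, he will lose"),
    ]
    print(poll(posts))  # {'Candidate A': 1.0, 'Candidate B': 0.0}

A production system would replace the word lists with a trained sentiment model and weight posts by reach, but the aggregation step would keep this general shape.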

As with any technological advancement, like the internet, blockchain or smartphones, there is always the possibility that AI will be misused along the way, but he said the benefits far outweigh the negatives, and that humans may have to intervene with some measures to prevent such side effects.

He also warned against political moves to regulate the AI industry and the technology itself with traditional rules, calling for a new set of rules and standards to be drawn up.

“AI and emotional intelligence need to belong in the hands of individuals, not governments or companies, and should be open to everyone,” he added.

By Kim Young-won (wone0102@heraldcorp.com)
