Milena Marinova: AI is only as dangerous as we are
By Sydney Radclyffe for Box News
By now, it’s clear to most people that Artificial Intelligence (AI) has become an essential part of modern life. These days, the media seems to talk about little else, from automated technology to education and data security. But it’s also apparent that many people are suspicious, worried, or simply uninformed about what AI is, what it does, and what it could do in the future.
Experts in the tech field, like Elon Musk, have warned of the possible dangers of AI, often in catastrophic language that provokes fear. To get a clearer picture of the situation, we sat down with Milena Marinova, Senior VP in charge of AI Products and Solutions at Pearson, a global company focused on bettering education.
Beware of bias, perception and hyperbole
Milena starts by explaining that pessimistic attitudes toward AI are often shaped, at least in part, by bias. Language and perception are critical to public opinion: what most people know as ‘Artificial Intelligence’ is usually called ‘machine learning’ within the industry, and the difference is significant.
Hollywood and the media have often portrayed AI in an unfavourable or sinister light, giving the term widespread negative connotations. The friendlier-sounding ‘machine learning’, by contrast, is a label fewer members of the public would recognise, yet both describe systems able to perform tasks that would otherwise require human intelligence.
Robots won’t be taking over the world – unless we ask them to
The public’s most significant worries about AI stem from a fear of losing control: an imagined future in which humans place too much trust in intelligent machines, are eventually outsmarted, and are left at the ‘mercy of the robots’. Milena rejects the idea of AI as an out-of-control force separate from humanity. A computer will only do what its creator programs it to do, so it is entirely up to humans to build the AI systems we want to see in the world. These are human creations designed to solve human problems, not an imposing foreign force.
AI isn’t as prevalent as some people think
The constant buzz around AI makes it seem as though it’s already in every part of our daily lives, with some taking this as proof that the first step of the ‘robot takeover’ is underway.
According to Milena, the number of practical applications for AI in consumer-facing products is still relatively low. Outside of personalisation in streaming services and virtual assistants like Amazon’s Alexa and Google Home, the average person is unlikely to interact with AI on a day-to-day basis.
AI systems are currently far less developed than many people assume, and fears over the impending arrival of dangerous, autonomous robots remain, as yet, conjecture.
The future is ours – and we have a responsibility to protect it
According to Milena, strong human leadership with an ethical grounding is of great importance as we look to the future of AI. Responsibility for these systems rests entirely with us, and the closer we get to creating artificial near-human intelligence, the greater the need will be for new codes of ethics and management.
As is pointed out in most conversations about AI, no matter how intelligent systems may become, they will always lack human compassion, emotion, and a moral compass. Greater cooperation between the public and private spheres will be of vital importance in the near future. Tech companies and corporations must be transparent in collaborating with governments to create legislation around AI.
Milena believes that a holistic understanding of the potential problems caused by AI, as well as solutions it could provide, is the most crucial step toward a safer future, in which the tools we rely on leave us more time to ‘be human’.