Milena Marinova: AI is only as dangerous as we are
By Sydney Radclyffe for Box News
By now it’s clear to most people that Artificial Intelligence is important. These days the media seems to talk about little else, covering everything from automated technology to education and data security. But it’s also apparent that many people are suspicious, worried, or simply uninformed about what AI actually is, what it does, and what it could do in the future. Adding to this, a number of prominent figures in the tech world, such as Elon Musk, have warned the public of the possible dangers of AI, often in catastrophic language that provokes fear. To get a clearer picture of the situation, Box News sat down with Milena Marinova, Senior VP of AI Products and Solutions at Pearson, a global education company.
Milena begins by explaining that pessimistic attitudes toward AI are partly shaped by bias. Language and perception, she points out, play a major role in public opinion: what most people call ‘Artificial Intelligence’ is usually referred to within the industry as ‘machine learning’. While Hollywood and the media have often cast AI in a sinister light, giving the term widespread negative associations, the comparatively friendly-sounding ‘machine learning’ is one few members of the public would recognise. Yet both labels describe systems able to perform tasks that would otherwise require human-level intelligence.
The public’s greatest worries about AI stem from a fear of losing control: an imagined future in which humans place too much trust in intelligent machines and are eventually outsmarted, left at the ‘mercy of the robots’. Milena rejects the idea of AI as an out-of-control force separate from humanity, highlighting the essential fact that a computer will only do what its creators program it to do. It is therefore entirely up to humans to build the AI systems we want to see in the world. At the end of the day, AI is a human creation designed to solve human problems, not an imposing foreign force.
The constant buzz around AI makes it seem as though it is already embedded in every part of our daily lives, and some take this as proof that the first step of the ‘robot takeover’ is underway. According to Milena, however, the number of practical AI applications in consumer-facing products is still relatively low; outside of personalisation in streaming services and virtual assistants such as Amazon’s Alexa and Google Home, the average person is unlikely to interact with AI on a day-to-day basis. AI systems are therefore much less developed than many people assume, and fears over the impending arrival of dangerous, autonomous robots remain, for now, conjecture.
The key going forward, Milena argues, will be strong human leadership with an ethical grounding. Responsibility for these systems rests entirely with us, and the closer we get to creating artificial near-human intelligence, the greater the need will be for new codes of ethics and governance. As almost every conversation about AI points out, however intelligent these systems become, they will always lack human compassion, emotion, and a moral compass. Greater cooperation between the public and private spheres will also be vital: tech companies and corporations must work transparently with governments to create legislation around AI. Milena believes that a holistic understanding of the problems AI could cause, as well as the solutions it could provide, is the most crucial step toward a safer, more efficient future, one in which the tools we rely on leave us more time to ‘be human’.