Marjan wants to open AI’s “black box”
AI researcher Marjan Alirezaie hopes to make AI systems like deep learning more transparent.
AI methods like deep learning perform well on tasks like diagnosing skin cancer. The problem is that we do not know exactly how or why they work. By combining two different AI methods, Marjan Alirezaie, AI researcher at Örebro University, hopes to understand what is going on inside AI’s “brain”: “If systems can explain how they arrive at their solutions, then we will be able to trust them,” she says.
The human brain has both an analytical side and a calculating side, which draw conclusions and find solutions. Within artificial intelligence, AI, there are two methods that to some extent resemble these functions. One is a data-driven method, such as machine learning or deep learning, while the other is a knowledge-driven method, based more on logic and symbols.
The problem is that the two methods cannot cooperate the way they do in a human brain. This is something that Marjan Alirezaie, researcher in computer science at the Centre for Applied Autonomous Sensor Systems (AASS), wants to change.
“By combining these two AI methods we hope to make autonomous systems more robust, both in terms of being able to evaluate various situations and in interacting with their environment and with humans.”
AI systems must generalise and adapt their knowledge
If only one AI method is used at a time, then the field of application is limited. Marjan Alirezaie explains:
“If one relies only on a data-driven method, then it’s necessary to have access to large amounts of data – and good data – all the time. If you have that, then the system works okay, but if you change the situation or the environment just a little bit, there are no guarantees that the system will continue to function. Machines can learn, but they have difficulty generalising what they have learnt.”
Marjan Alirezaie continues: “That’s not the case with humans. We are very good at generalising and adapting knowledge so that it can be used again in another situation or environment. If we want robust autonomous systems that aren’t always dependent on humans keeping an eye on them, then the system must be able to learn from data, but also be able to generalise and adapt its knowledge.”
Easier to trust more transparent AI
Another advantage of integrating the two AI methods is that humans may get a better understanding of how AI works.
“Deep learning works very well in specific fields, like diagnosing skin cancer. But we still need a human to be involved, since we don’t know how the system works. We see that a solution is attained, but we don’t see what’s going on inside the ‘black box’. By integrating these two methods, we’re trying to open this ‘black box’ and monitor the processes involved. If systems become more transparent and can explain how they arrive at their solutions, then we will be able to trust them.”
According to Marjan Alirezaie, there are many applications for this type of technology. One example is rescue operations, where drones fly in material or supplies. Since the environment in such situations can change quickly, this requires better-performing AI.
“If a fire suddenly breaks out along the drone’s planned mission route, then it needs to decide to change its route.”
At the core: integrating the two AI methods
Since the two AI methods already exist and work well, Marjan Alirezaie has set her sights on integrating them, a task that is not easily solved.
“The same holds true for humans. It’s not obvious when we learn through ’data’ alone and when we generalise and adapt knowledge based on rules,” she explains, adding:
“We want to create a balance, so that data-driven and knowledge-driven methods can both update and assist one another. If a data-driven method reaches a result that doesn’t satisfy the rules we’ve stated, then the system needs to understand that it must try again and find a new way to solve the problem.”
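To make that feedback loop concrete, here is a minimal sketch of how such a balance might look in code. All names and the toy route planner are hypothetical illustrations, not taken from the AASS research: a data-driven component proposes a solution, a knowledge-driven rule check vetoes it, and the veto is fed back so the next proposal avoids the violation.

```python
# Minimal sketch of a neuro-symbolic feedback loop.
# All names (predict_route, violates_rules, plan) are hypothetical,
# not drawn from the actual research described in the article.

def violates_rules(route, hazards):
    """Knowledge-driven check: reject routes that pass through known hazards."""
    return any(waypoint in hazards for waypoint in route)

def predict_route(start, goal, excluded):
    """Stand-in for a data-driven planner (e.g. a learned model).
    Here: a trivial placeholder that avoids already-excluded waypoints."""
    candidates = [
        [start, "A", goal],
        [start, "B", goal],
        [start, "C", goal],
    ]
    for route in candidates:
        if not any(waypoint in excluded for waypoint in route):
            return route
    return None

def plan(start, goal, hazards, max_attempts=3):
    """Let the two methods update and assist one another:
    the learned planner proposes, the rule layer vetoes,
    and the veto feeds back into the next proposal."""
    excluded = set()
    for _ in range(max_attempts):
        route = predict_route(start, goal, excluded)
        if route is None:
            break
        if not violates_rules(route, hazards):
            return route                        # rules satisfied: accept
        excluded |= set(route) & hazards        # feed the violation back
    return None                                 # no rule-satisfying route found

# Example: a fire breaks out at waypoint "A", so the planner must re-plan.
print(plan("start", "goal", hazards={"A"}))     # -> ['start', 'B', 'goal']
```

In a real system the placeholder planner would be a learned model and the rule check a logical reasoner; the point of the sketch is the control flow: propose, check against the rules, and feed the failure back so the system tries again.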
Text/photo: Jesper Eriksson
Translation: Jerry Gray