Q&A with speaker Jan Bim of Czech Technical University
In our first Q&A with one of our expert speakers from CyberSec & AI, we talk to Jan Bim from Prague’s Czech Technical University.
Jan Bim is a postdoctoral researcher at the Czech Technical University (CTU), working in the Department of Computer Science at the Faculty of Electrical Engineering. An expert in AI and evolutionary algorithms, Jan has carried out extensive research across his area of specialty.
Can you tell us a bit about your work and research at CTU?
I am a postdoctoral researcher in the Department of Computer Science at the Faculty of Electrical Engineering. I have always been excited about intelligence in any form and have studied it from many perspectives.
I obtained my master's degree in artificial intelligence and cognitive sciences, where I learned about artificial and human intelligence, as well as 'collective intelligence' in nature. I followed that with a PhD in computational neuroscience, where I expanded my knowledge of the functioning of the human brain.
From that, I decided to follow my dream of creating artificial intelligence, which is why I am currently at CTU working on artificial neural networks.
What will be the subject of your presentation at CyberSec & AI?
My talk’s topic is the identification of attack vectors from behavioral graphs. It is the result of my collaboration with Petr Kovac of Avast.
Can you tell us a little more about what the talk will address?
We obtain very specific data — namely behavioral graphs — that record extensive information about the run of a computer system. In our current project, we aim to identify attack vectors that led to the appearance of malicious nodes in a graph.
To do this, we use graph neural networks (GNNs) to classify each node in the system, and we then apply an explanation tool to that classification. This reveals the importance of particular nodes and the relationships between them, helping us to explain why a node is classified as malicious and, therefore, to understand the attack vector.
We can then examine how malware got into a system: for instance, whether it was downloaded by another program that was in turn installed from a flash drive. Through this process, we would know that the attack came from a compromised flash drive.
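The idea described above can be illustrated with a toy sketch. All names, features, and scoring rules below are illustrative assumptions, not the actual Avast/CTU pipeline: nodes of a small behavioral graph are classified with one round of message passing, and a simple edge-ablation "explainer" (importance = how much a node's score drops when an edge is removed) is used to walk back from a malicious node to its attack vector.

```python
# Toy behavioral graph: edge (u, v) means "u created or launched v".
# Hypothetical example data, mirroring the flash-drive scenario in the text.
edges = [
    ("flash_drive", "installer"),
    ("installer", "downloader"),
    ("downloader", "payload.exe"),
    ("browser", "report.pdf"),
]

# Assumed per-node suspicion features (e.g. unsigned binary, network beaconing).
features = {
    "flash_drive": 0.2, "installer": 0.4, "downloader": 0.6,
    "payload.exe": 0.9, "browser": 0.1, "report.pdf": 0.0,
}

def classify(edges, features, threshold=0.5):
    """One message-passing round: a node's score mixes its own feature
    with the mean feature of its parents (the nodes that spawned it)."""
    parents = {}
    for u, v in edges:
        parents.setdefault(v, []).append(u)
    scores = {}
    for node, feat in features.items():
        ps = parents.get(node, [])
        neigh = sum(features[p] for p in ps) / len(ps) if ps else 0.0
        scores[node] = 0.7 * feat + 0.3 * neigh
    return {n: s >= threshold for n, s in scores.items()}, scores

def explain(node, edges, features):
    """Edge-ablation explainer: an edge's importance for `node` is the
    drop in the node's score when that edge is removed from the graph."""
    _, base = classify(edges, features)
    importance = {}
    for e in edges:
        _, scores = classify([x for x in edges if x != e], features)
        importance[e] = base[node] - scores[node]
    return importance

def trace_vector(node, edges, features):
    """Walk back along the most important incoming edge at each step,
    recovering the chain that led to the malicious node."""
    path = [node]
    while True:
        imp = explain(path[0], edges, features)
        incoming = {e: w for e, w in imp.items() if e[1] == path[0] and w > 0}
        if not incoming:
            return path
        best = max(incoming, key=incoming.get)
        path.insert(0, best[0])

labels, scores = classify(edges, features)
print(trace_vector("payload.exe", edges, features))
# e.g. the recovered chain: flash_drive -> installer -> downloader -> payload.exe
```

A real system would replace the single averaging round with a trained GNN and the ablation loop with a dedicated GNN explanation method, but the shape of the reasoning (classify each node, then ask which edges made the classification happen) is the same.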
Are you excited by the young talent in academia in the Czech Republic working in AI, machine learning and cybersecurity?
Yes, we have really good people in our universities, especially in math and programming, and we've also had winners of the ACM competition. The talent is really there, but a lot of it moves abroad for new opportunities. It would be fantastic if we could entice more of that talent to return in the future, especially those who have spent time at major firms such as Google or in top US organizations. It would really help develop the next generation of local talent.
What are your predictions for the future of cybersecurity and AI?
We are, of course, all still waiting for the first company to create a fully functioning and reliable autonomous car. It's one of the big milestones that no one has yet achieved. But the bigger question is when we will reach the point at which AI can do everything a human can, or better.
However, I don’t believe AI will ever have true free will. I think designers and programmers will always have to tell the robot, or system, in some shape or form what to do. So I don’t think we are in danger, as some believe, of AI taking over. However, I am aware that there is a perception in the general public that this may be the case. I recently had trouble at customs in North America when the customs official discovered the area I worked in. He was worried I was one of those ‘robot people’ and I had to convince him of my good intentions and professional reputation.