Building guidelines for the future of artificial intelligence
Nicolas Papernot sees CyberSec & AI Prague as an opportunity to discuss key principles for AI.
As the artificial intelligence research community comes together for CyberSec & AI Prague, guiding principles are on Nicolas Papernot’s mind. “This community has an opportunity to contribute foundations to significant security approaches,” says the assistant professor of engineering at the University of Toronto and Vector Institute.
Papernot, whose research sits at the intersection of security, privacy, and machine learning, believes cybersecurity research can devolve into an “arms race” in which defenders weigh risk against the cost of protection. Keeping up with attackers becomes a never-ending struggle. “What we still need to work out as a community is a more principled approach,” he says. “We can inspire ourselves by looking at key principles.”
In his talk at this very future-facing conference, Papernot is going old school, turning to the design principles enumerated by Jerome Saltzer and Michael Schroeder in their 1975 article ‘The Protection of Information in Computer Systems’. Building systems that proactively adhere to key design principles empowers the AI and machine learning community to rise above the arms race, in which attackers have the advantage, Papernot says.
Other fascinating speakers from the overlapping worlds of cybersecurity and artificial intelligence are coming to the capital of the Czech Republic on Oct. 25 for CyberSec & AI Prague. Attendees from across the cybersecurity world, engineers among them, will have a great opportunity to build both their knowledge base and their professional networks.