Industry perspectives: Preventing malicious use of AI
The researcher
Miles Brundage
Policy analyst at OpenAI, a non-profit AI research company
“There is a lot of potential for AI to be misused. AI could analyse social networks and automate the process of creating targeted phishing messages. This won’t happen overnight — natural language processing is not at human levels of performance — but we need to prepare for these sorts of threats over the next five to 10 years. Among other responses, AI developers might want to adopt norms from the cyber security community. For example, a researcher might disclose a machine learning vulnerability to a potentially affected party before releasing a paper on the topic.”
The consultant
Elliot Rose
Head of cyber security at PA Consulting
“In the rush to implement AI, people are not talking enough about how to secure it. That may be starting to change. The big technology companies have started recruiting aggressively in this marketplace in the past six months.
“To increase security, algorithms have to be less of a ‘black box’. We have to be able to spot deviations in how they work. We should also think about checks and balances for AI that processes sensitive or personal data, such as data on sexual orientation, health or children. There may need to be a human in the loop in these cases.”
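One way to make "spotting deviations" concrete is to monitor a model's decisions for drift against a reference baseline and route cases to a human reviewer when the two diverge. The sketch below is illustrative only and is not drawn from the interview; the baseline rate, window size and tolerance are assumptions that would be tuned for a real system.

```python
# Illustrative sketch (assumed parameters): flag a model for human review
# when its output distribution drifts away from a reference baseline.
from collections import deque


class DriftMonitor:
    def __init__(self, baseline_rate, window=1000, tolerance=0.05):
        # baseline_rate: expected fraction of positive decisions (assumed known)
        # window, tolerance: illustrative values only
        self.baseline_rate = baseline_rate
        self.window = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, prediction: int) -> bool:
        """Record one 0/1 decision; return True if a human should review."""
        self.window.append(prediction)
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet to judge drift
        observed_rate = sum(self.window) / len(self.window)
        return abs(observed_rate - self.baseline_rate) > self.tolerance


monitor = DriftMonitor(baseline_rate=0.10)
if monitor.record(prediction=1):
    print("Output distribution has drifted -- route recent cases to a human reviewer")
```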
The technology company
Sridhar Muppidi
Chief technology officer at IBM Security
“Using AI to power cyber attacks, for example to crack passwords faster, is a very real threat. But there are multiple ways to harden AI and make it more resilient to attacks. You can teach it to ignore spikes, for example, and you can run checks to see if inputs have been tampered with.”
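A minimal illustration of the kind of checks described, assuming a pipeline where incoming data is tagged with a keyed hash at the source and legitimate feature values fall inside known bounds: verify the tag before the data is consumed, and clamp extreme values ("spikes") before they reach the model. The key, payload format and clipping bounds here are assumptions made for the sketch, not details from the interview.

```python
# Illustrative sketch (assumptions: inputs carry an HMAC computed at the source,
# and normal feature values lie within known bounds).
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-key"  # hypothetical key for the sketch


def input_is_untampered(payload: bytes, expected_tag: str) -> bool:
    """Recompute the HMAC of the payload and compare it to the tag recorded at source."""
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected_tag)


def clip_spikes(values, lower=-3.0, upper=3.0):
    """Clamp extreme feature values so a single outlier cannot dominate the model."""
    return [min(max(v, lower), upper) for v in values]


payload = b"0.4,0.7,12.9"
tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
if input_is_untampered(payload, tag):
    features = clip_spikes([float(x) for x in payload.decode().split(",")])
```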