This week I participated in the “NATO SPS Advanced Training Course on Cyber Defence in the context of Energy Security” event in Kyiv, where I gave a lecture about future challenges in the context of moving to the Semantic Web.
Why did I decide to speak about it? Firstly, science is not only about fixing existing problems; it is also about avoiding future ones. This means we have to know (or at least imagine) possible future issues.
Secondly, the interdisciplinary nature of modern scientific research confirms this: scientists have to cross the borders of their primary discipline to discover new horizons of human knowledge. Here is an example.
The IBM 2016 Cyber Security Intelligence Index says that “Your next attacker is likely to be someone you thought you could trust. Insider threats continue to pose the most significant risk to organizations everywhere”.
There are a lot of publications about human behaviour as an important component of cyber security, for example the paper “Human Behaviour as an aspect of Cyber Security Assurance”. You can also find a vacancy description for a research fellow at the University of the Sunshine Coast in Queensland, Australia, noting that “the successful candidate will work on a range of projects that apply human factors methods to optimise prevention and response efficiency within a cyber security context.”
From my point of view, we cannot talk about effective cyber defence when a cyber security department cannot correlate occurred incidents with quantitative data about the company’s employees, especially for reducing the likelihood of insider attacks. In this case, an employee, as a potential insider, is a central subject of cyber security. Information about an employee has to be provided by the HR department in a methodized form, as input for quantitative algorithms.
What information can be of interest to security officers?
The information has to be well structured. In order to move into the Semantic Web, we have to focus on the use of taxonomies/ontologies as a guarantee of semantic interoperability. This will create an opportunity to build a system of prescriptive analytics in HR – I wrote about it in my previous article “Semantic interoperability as a basis of Meaningful Analytics in HR”. The structure of the needed information is described in the article “A Magic Tuple of Prescriptive Analytics in Workforce Development”.
In the context of cyber security, taxonomies of HR, of cyber threats and of cyber incidents are needed for deep analysis and predictions. Correctly built taxonomies are metric spaces, as was proven in the work “Model of computations over classifications”. This means we can build correct mathematical models for the purpose of prediction. It opens the possibility of using machine learning and AI methods, especially collaborative filtering – we will be able to find hidden skills/knowledge, which is a very important factor against commercial/enterprise espionage. This approach is also effective in medicine, where statistics predict disease development.
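To make the collaborative-filtering idea concrete, here is a minimal sketch with entirely made-up data: employees are profiles of skill proficiency levels, and a skill that similar colleagues possess but that is missing from an employee’s own profile is a candidate “hidden” skill worth verifying. The names, skills and thresholds are my illustrative assumptions, not a production method.

```python
import math

# Hypothetical employee -> {skill: proficiency level (1-5)} data.
ratings = {
    "alice": {"python": 5, "sql": 4, "crypto": 3},
    "bob":   {"python": 5, "sql": 5, "crypto": 4, "reverse_eng": 4},
    "carol": {"python": 4, "sql": 4},
}

def cosine(a, b):
    """Cosine similarity between two skill profiles (dicts)."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[s] * b[s] for s in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def predict_hidden_skills(target, ratings):
    """Rank skills the target has no record of, weighted by neighbour similarity."""
    scores, weights = {}, {}
    for name, profile in ratings.items():
        if name == target:
            continue
        sim = cosine(ratings[target], profile)
        for skill, level in profile.items():
            if skill in ratings[target]:
                continue
            scores[skill] = scores.get(skill, 0.0) + sim * level
            weights[skill] = weights.get(skill, 0.0) + sim
    return {s: scores[s] / weights[s] for s in scores if weights[s] > 0}

# Carol has no recorded "crypto" or "reverse_eng" skills, but her
# similar colleagues do - so both surface as likely hidden skills.
print(predict_hidden_skills("carol", ratings))
```

A real system would of course run over taxonomy-coded skills rather than free-text names, which is exactly where the semantic interoperability described above matters.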
We see incredible dynamics in the development of artificial intelligence (AI). What does this mean for the cyber security sphere? I would like to quote: “While artificial intelligence merely refers to any level of intelligence demonstrated by a computer or machine, “artificial general intelligence” describes the ability of a machine to match the cognitive ability of a human being.” It leads to a question for the future: “Who or what is an attacker?”
The information about the nature of an attacker will define the strategy for counteractions. Yes, of course, humans will always be at the end of the chain of subjects who participate in an attack – until AI corrupts Isaac Asimov’s laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Is it correct to compare a robot and AI in the context of cyber security? I would say yes! But what should we do if an AI (robot) is just a blind tool in an attack on humans, unable to identify the threat it poses to them? And what do Asimov’s laws say about a situation where the victim is another AI (robot) that assists people with peaceful tasks?
I see we have to find a way to identify the nature of the attacker. We should realize that Artificial General Intelligence (AGI) will probably have the ability to mimic or imitate a person. But we understand that the speed of learning and thinking will be different for a person and an AI: an AI that learns at the same speed as an average person would be inefficient! Another point is that the knowledge capacity of an AI at any given moment is overwhelming compared to that of humans. People have a limited capacity to keep and recall all their accumulated knowledge, while an AI will acquire more knowledge than a person because it can scale its memory much faster. In any case, we have to find the source of a threat, because counteractions against intellectual executors will lead to long cyber battles – especially if we talk about an intellectual swarm.
HR analytics can be used for:
- finding correlations between knowledge items, their proficiency levels and insider incidents;
- identifying a cyber attacker’s nature (AI or human), which will have significant value in the future.
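As a minimal illustration of the first point, assume hypothetical per-employee records: a proficiency level in some sensitive skill and a binary insider-incident flag. Pearson correlation on such data (equivalent to the point-biserial coefficient for a 0/1 variable) shows whether proficiency and incidents move together; all numbers below are invented for the example.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: proficiency in a sensitive skill (1-5) and whether
# the employee was involved in an insider incident (0 = no, 1 = yes).
proficiency = [1, 2, 2, 3, 4, 4, 5, 5]
incident    = [0, 0, 0, 0, 1, 0, 1, 1]

r = pearson(proficiency, incident)
print(f"correlation between proficiency and incidents: {r:.2f}")
```

A strong positive value on real data would mean highly proficient employees appear in incidents more often – exactly the kind of quantitative input from HR that a security department could act on.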
On the other hand, the security department has to share statistics about insider incidents, so that future problems can be predicted already at the recruiting stage and targeted education can be planned.