IT Security Is a People Problem

What information do your organization’s security tools take as input to decide whether an action is safe? Most security software only takes technical IT information as input; firewalls use IP information, and malware detection often relies on file fingerprinting. I think we know there are significant shortcomings in looking only at IT data when we assess risk; these inputs fail to address the most significant threat in any security landscape: the people. At Tech Field Day 16 we heard from Forcepoint (video here) about how their User and Entity Behaviour Analytics (UEBA) product takes in a lot of external data to make decisions about the risk associated with specific actions by a staff member or other entity. My usual TFD disclaimer applies.

Attacks target people

There are attacks that target infrastructure; poorly configured or poorly secured applications that face the Internet are a starting point for attacking a computer system. By being good at IT security, it is relatively easy to mitigate these attacks; keeping up to date with patching and security configuration goes a long way. The challenge is the more sophisticated attacks that target specific companies and the staff at those companies. The combination of public information and social engineering can lead to compromised accounts or guessed passwords. Very sophisticated attacks may also involve gaining physical access to the network under attack; a few minutes is long enough to leave behind a specialized tiny computer to use later in the attack.

Privileged accounts

The attacker’s first step is to compromise something in the environment, a user account or a computer. The next aim is to get both closer to the center of the network and to gain more privileged access. A system administrator account can be used to access far more than a regular user account can, and it makes it easier to cover your tracks. Local administrator accounts and service accounts are both excellent targets for privilege escalation; these types of accounts are often excluded from system monitoring. For a “defence in depth” strategy, we need to keep an eye on all kinds of accounts and watch for strange behavior. This peculiar behavior might be a user account that logs onto a different computer or accesses data from an unusual network location. An example would be a user account that logs onto an application server and then accesses a database server. Usually, the application service account would be used to access the database; a user would only access the application server from their own computer.
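To make that last example concrete, here is a minimal sketch of the kind of rule that would flag a human account reaching the database directly instead of going through the application service account. This is not Forcepoint’s implementation; the event fields and the allow-list of service accounts are assumptions for illustration only.

```python
# Minimal sketch: flag direct database access by ordinary user accounts.
# Field names and the allow-list are illustrative assumptions, not any vendor's schema.

EXPECTED_DB_CLIENTS = {"svc-app-prod"}  # service accounts that are expected to reach the database

def is_suspicious(event: dict) -> bool:
    """Return True if a login event looks out of place for the target system."""
    if event["target_role"] != "database":
        return False
    # The application service account is expected here; any other account is unusual.
    return event["account"] not in EXPECTED_DB_CLIENTS

if __name__ == "__main__":
    event = {
        "account": "jsmith",            # a human user account
        "source_host": "app-server-01",
        "target_host": "db-server-01",
        "target_role": "database",
    }
    print(is_suspicious(event))  # True: a human account reached the database directly
```

A real UEBA product would learn these baselines rather than hard-code them, but the principle is the same: the question is not “is this login valid?” but “is this login normal for this account?”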

Insiders

Perhaps a more significant threat is a staff member who does the wrong thing, by being human. It can be as simple as an abuse of access, such as searching for the personal information of public figures, family members, or someone with whom the person is in a relationship. This information disclosure and privacy breach is a risk to the business that your firewall will not understand. Of course, there is a higher risk from someone with ill intent. Maybe an employee who has been hired by a competitor tries to take corporate data or secrets with them: downloaded contact lists or secret product data. Perhaps someone had poor performance reviews or was passed over for promotion, and now they want revenge or just what they feel they are owed. The more trusted the individual, the higher the risk. The challenge here is that a trusted person is accessing systems they are authorized to use; it is their intent that makes them a threat.

Use non-IT data

If we need to assess people, rather than just technology, how do we do it? The approach that Forcepoint showed is to combine IT and non-IT data, correlating multiple sources to identify the risk of an action. As a server support analyst, it is entirely reasonable for me to log in to the company HR server when I have been assigned a helpdesk ticket for that system. If I have not been assigned that ticket, then there is little reason for me to log in to that HR server; I should use the HR web app to log my leave request. Similarly, if my ID card was used to enter the London office and I logged onto a PC in that office, then I should not be trying to enter the New York office an hour later or log in to a PC in that office.

Information on that HR system could also help make my security system more aware of my state of mind. If HR knows I was turned down for a transfer I requested six months ago, and my performance review last month was not as favorable as the one last year, then maybe I do not feel so positive about the company. The rabbit hole goes deeper: what if the security system also looked at my emails and instant messages? Maybe I have been complaining about my manager or talking to external recruiters? These might be warnings that I am more likely to take unauthorized action with company information. The more information sources the security system has, the better it might know my state of mind and my intent, to determine whether my actions are routine or risky to the business.
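As a sketch of that correlation idea, a risk score for a single action might add up weighted signals from IT and non-IT sources and alert when the total crosses a threshold. The signal names, weights, and threshold below are assumptions for illustration, not Forcepoint’s scoring model.

```python
# Illustrative risk scoring that correlates IT and non-IT signals for one action.
# Signal names, weights, and the threshold are assumptions made for this sketch.

WEIGHTS = {
    "login_without_assigned_ticket": 30,  # server login with no matching helpdesk ticket
    "badge_location_mismatch": 40,        # badge swipe and logon in different cities too close together
    "negative_hr_indicators": 15,         # declined transfer, weaker performance review
    "negative_sentiment": 15,             # complaints or recruiter contact in monitored channels
}
ALERT_THRESHOLD = 50

def risk_score(signals: dict) -> int:
    """Sum the weights of every signal that fired for this action."""
    return sum(weight for name, weight in WEIGHTS.items() if signals.get(name))

if __name__ == "__main__":
    signals = {
        "login_without_assigned_ticket": True,
        "badge_location_mismatch": False,
        "negative_hr_indicators": True,
        "negative_sentiment": False,
    }
    score = risk_score(signals)
    print(score, "ALERT" if score >= ALERT_THRESHOLD else "ok")  # 45 ok
```

The point of the sketch is the correlation: no single signal is damning on its own, but an unexplained login combined with a disgruntled employee looks very different from either one alone.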

Making more people problems

New technologies always bring new issues. As we have seen with the Facebook data collection outrage that is happening now, people do not like being watched and recorded. We have the technology to monitor sentiment in people’s communications, but should we? In what situations is the intrusion into these very personal channels acceptable? The technology that Forcepoint uses was developed for government agencies; should it be applied in commercial settings? I think that disclosure of the surveillance is essential; informed consent is crucial. There is another aspect: what happens when we are aware of surveillance and behave differently so that we appear non-threatening? There is a passage in Neal Stephenson’s book Snow Crash where a federal employee is supposed to read an electronic memo but has no interest in its contents. The employee knows the acceptable behavior for reading the memo and mimics that action, so the surveillance system believes that she has read the memo properly. If we watch people all the time, will they pretend to conform and hide their real nature? Will the same happen with intruders in our systems? Will they behave like typical staff to mask their behavior?

It is clear that firewalls and passwords are not enough to secure most large computer systems. Adding behavior analysis that spans a wide variety of data sources can help, provided it does not require manual tuning for every possible eventuality.

