The human behind the hack: identifying individual hackers
Sun Tzu’s counsel to ‘Know thy enemy’ is a staple of cyber-security advisories, yet it’s only recently that security practitioners have been able to flesh out our knowledge of hackers as human entities.
Why do hackers do it? On a superficial level, the probable motivations seem obvious: to steal money if they are criminal hackers, to promote socio-political agendas if they are hacktivists, and to conduct cyber-espionage if they are working for a nation state. A better understanding of hackers’ human traits has attracted renewed interest among those engaged to help threatened companies and organisations, as an aid to defensive security regimens.
Threat Intelligence (TI)-led cyber security aims to gather, collate and analyse information about sources of offensive action, and identifying the human element of online adversaries is part of this approach. Data gleaned from activities can be combined to build profiles of hackers (and hacker groups) to inform decisions around how cyber-security resources should be deployed most effectively. TI does not replace traditional product-and-policy based cyber-security strategies, such as intruder detection and antivirus software, but it supplements and augments them.
Gaining insight into the ‘hacker as human’ is a factor in TI’s overall aims and practices. This has less to do with psychological understanding aimed at trying to locate, apprehend, and even rehabilitate Black Hat hackers than with an attempt to understand what their innate human characteristics reveal about their future intentions.
“TI will help us understand threat actors [the person or entity responsible for a malicious act] and their motivations,” says Ian Glover, president of information security industry body Crest. “It can [also] be used to establish credible scenarios to test security environments. As organisations move to continuous threat monitoring, we will see what is coming over the hill, and take action to ensure we are prepared.”
We have “really just started our understanding of the ‘hacker as human’ within the last five years”, says Jonathan Couch, senior vice president for strategy at ThreatQuotient. Before then, “there was little real understanding of who hackers were, what they were capable of, and how they executed attacks. TI, plus greater information sharing and transparency around incidents, now provides in-depth knowledge of how many of these adversaries operate.” Couch says security organisations now see ‘attack attribution’ as a key element of their work, looking for distinctive patterns to identify perpetrators.
“The more we study threat actors, investigate intrusions and share information, the more the human side of hackers is seen,” says Rebekah Brown, threat intelligence lead at Rapid7. “We analysed a significant amount of information over the past few years on threat actors and actor groups. We understand how nation-state-sponsored actors operate – including how they are tasked, how they approach operations, and how groups associated with the same nation state collaborate.”
To profile hackers, targeted companies must have an adequate data set to identify characteristics of an attack, says Richard Starnes, cyber-security manager at Capgemini, “and to understand who could be targeting the company, and what areas of the business they may be aiming at”. The increase in the number and degree of cyber attacks provides more incidents for TI to base deductions on, and attackers – groups and individuals – continue to be identified by tactics, techniques and procedures (TTPs).
The human nature of hackers is revealed not only by ‘fingerprints’ in an intrusion – a commonly-used password, a preference for a particular type of tool or action, or typos on the command line – but also by nature of the activity itself. Brown says: “When faced with detection, a setback, or a security measure that kicks them out of a system, we continue to see the adversary regroup, retool, and attempt to press on with their objective. These actors do not give up.”
According to Pascal Geenens, Radware’s security evangelist, the human behind the hacker can be basically classified using two parameters: motivation and skill level. Motivations include financial, revenge, political, religious, ideological and ego. Skill levels range from chancers ineptly using hack tools bought off the Dark Net to groups and individuals with advanced coding competence. Both degrees of ability become part of their attribution signature, and help toward threat-level classification. “There is no industry-wide consensus on taxonomy and classification of hackers,” Geenens acknowledges, “but I’m convinced that having a single, global convention would help increase our insight and use intelligence to prepare for, or prevent, attacks.”
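Geenens’ two-parameter scheme lends itself to a simple data model. The following is a minimal sketch of how such a taxonomy might be encoded for triage, assuming hypothetical names (`ActorProfile`, `threat_level`) and an illustrative weighting that is not part of any industry standard:

```python
from dataclasses import dataclass
from enum import Enum

class Motivation(Enum):
    FINANCIAL = "financial"
    REVENGE = "revenge"
    POLITICAL = "political"
    RELIGIOUS = "religious"
    IDEOLOGICAL = "ideological"
    EGO = "ego"

class SkillLevel(Enum):
    SCRIPT_KIDDIE = 1   # off-the-shelf tools bought on the Dark Net
    INTERMEDIATE = 2    # adapts existing tooling
    ADVANCED = 3        # writes custom exploits and malware

@dataclass
class ActorProfile:
    name: str
    motivation: Motivation
    skill: SkillLevel

    def threat_level(self) -> int:
        # Hypothetical weighting for illustration only: financially and
        # politically motivated actors with high skill score highest.
        weight = 2 if self.motivation in (Motivation.FINANCIAL,
                                          Motivation.POLITICAL) else 1
        return self.skill.value * weight

profile = ActorProfile("actor-001", Motivation.FINANCIAL, SkillLevel.ADVANCED)
print(profile.threat_level())  # 6
```

As Geenens notes, no such global convention exists yet; a shared enumeration like this is precisely what a common taxonomy would standardise.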
Recurrent, cyclical and seasonal shifts in the ‘threat calendar’ provide another dimension to how the humanness of ‘hackerkind’ drives their actions, often without their being aware of it, adds Geenens. “The gaming industry is a more likely target over holidays, and the threat comes mostly through motivations of revenge, ideology and fame,” he says. “During academic exam periods, the education sector faces a higher likelihood of being targeted by its own students – motivated by revenge or by a desire to cause disturbance to delay tests they may be unprepared for.”
The more that is known about motivations and the human behind the perpetrator, the better their actions can be anticipated and prepared for, says Geenens. “We know the profile of perpetrators, we know how to prepare for and prevent attacks. In the education sector for example, the skill level of the typical perpetrator is low, so we can expect them to skim the Dark Web for hacker services,” he explains. By monitoring ‘dark’ markets and forums, or by impersonating a Black Hat-for-hire, attacks can be prevented and perpetrators foiled.
“By identifying what kind of attacker your website might be a target for, a security team can craft a security policy geared towards that type of threat,” explains Ryan O’Leary, head of WhiteHat Security’s Threat Research Centre. “Hacktivists use distributed denial of service (DDoS) attacks, for instance, which target a site with loads of requests that bring servers down. In this case, the targeted company would need to beef-up servers, and be able to handle and recover from a DDoS attack.” Companies need to safeguard against all eventualities, O’Leary adds, but knowing who is targeting them lets companies prioritise what protection they implement first.
‘Hacker profiling’ assembles outline identities of hackers or hacker groups based on known intelligence combined with some expert conjecture. Often, information that seems of minimal value later becomes part of the TI jigsaw from which outline identities emerge.
“In the last decade, our ability to identify profiles and fingerprints of hackers has grown exponentially. If an attack happens, we can now analyse how the hacker got in, review any code written, any language used or a particular signature,” says Capgemini’s Richard Starnes. “All this leads to generating a much better profile of who is behind it.”
“Organisations use operational intelligence platforms to ensure each step of an attack or malicious activity can be tracked and correlated,” says Matthias Maier, security evangelist at Splunk. “By setting up ‘honeypots’ on the network layer and in business applications and portals, attacker tactics can be followed and analysed to understand what is used, and how they are trying to gain access. It can then be determined if multiple attacks are associated with the same attacker, and then we create their profile.”
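The correlation step Maier describes can be sketched in a few lines: group honeypot events by an attacker fingerprint and accumulate the tooling seen per attacker. This is an illustrative toy, assuming invented event fields (`src_ip`, `tool`, `user_agent`); real platforms weigh many weak signals, not a single attribute:

```python
from collections import defaultdict

# Each honeypot event carries observable attributes of the attacker.
events = [
    {"src_ip": "203.0.113.7",  "tool": "sqlmap", "user_agent": "sqlmap/1.5"},
    {"src_ip": "203.0.113.7",  "tool": "hydra",  "user_agent": "curl/7.68"},
    {"src_ip": "198.51.100.2", "tool": "sqlmap", "user_agent": "sqlmap/1.5"},
]

def fingerprint(event):
    # Naive key for the sketch: source IP. In practice analysts combine
    # timing, TTPs and infrastructure reuse, since IPs are easily rotated.
    return event["src_ip"]

profiles = defaultdict(list)
for e in events:
    profiles[fingerprint(e)].append(e["tool"])

for attacker, tools in profiles.items():
    print(attacker, sorted(set(tools)))
```

Once events cluster under one fingerprint, the per-attacker tool list becomes the seed of the profile Maier describes.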
Recognising motivations of a hacker can also help build threat ‘maps’ which can help develop a better cyber-security strategy, says Thomas Fischer, threat researcher and security advocate at Digital Guardian: “A politically-motivated hacker might allude to their characteristics through tools and methods they use – for example, by using non-Latin character sets and encodings to avoid detection.
“While knowing a hacktivist might target your company because it is involved in fracking, for example, your strategy may evolve to identify specific threats to your infrastructure – DDoS attacks, website defacement, data leakage – in relation to this.”
Deloitte cyber-risk director Massimo Cotrozzi says understanding a hacker’s profile, with reference to their individual modus operandi, and connections with other lone hackers, collectives, or organised crime, is very useful. “When TTPs [tactics, techniques and procedures] of a hacker or group are identified, they can be used to assign attack attribution. This is complex and generally used to identify the likely group, by matching TTPs and predicting where and when the next target will be.”
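Matching observed TTPs against those of known groups, as Cotrozzi describes, is often framed as set similarity. Below is a minimal sketch using Jaccard similarity over hypothetical technique identifiers (loosely styled after MITRE ATT&CK IDs); the group names and TTP sets are invented for illustration:

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two TTP sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical known-group TTP sets, styled after ATT&CK technique IDs.
known_groups = {
    "group-A": {"T1566", "T1059", "T1027", "T1486"},
    "group-B": {"T1190", "T1105", "T1071"},
}
observed = {"T1566", "T1059", "T1486", "T1105"}

scores = {g: jaccard(ttps, observed) for g, ttps in known_groups.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # group-A 0.6
```

As Cotrozzi cautions, a match like this identifies only the *likely* group; shared and borrowed tooling means the score is a lead for analysts, not proof.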
As profile patterns establish themselves, organisations can decide whether a targeted part of their infrastructure warrants reinforcement, or is likely to hold in the light of continued attacks because the hacker’s competence is not going to improve.
The more skilled and advanced the hacker, the more unique their signature of attack becomes, says Günter Ollmann, CSO at Vectra Networks. “While it may be difficult to associate the methodology and tooling to an individual, it is possible to track by grouping to an entity. For example, the way malware variables are set, allocated, and flushed in the code, as well as the language compiler: all provide hints to the education and experience of the developer.”
“Snippets of code or comments can reveal the origin of an attacker,” says Rick McElroy, security strategist at Carbon Black, “but there is so much shared and borrowed code that it becomes very hard to say, for example, a certain piece of malware we know was written by the Chinese government wasn’t further modified, and then used by another malicious actor. You have to blend all the available intelligence sources to attribute attacks.”
“Hackers are humans – drawing a picture of a hacker is possible, especially if tracking activity is consistent and all information is correlated continuously over time,” says Cotrozzi. “As humans, hackers also make mistakes, and may reuse assets, tools and methodologies which give away their identity. At this point, the hacker can be profiled from a social perspective. This could, for example, be by analysing the way they write, or how they code and cooperate with others.”
“The most valuable information lies in the infancy of a hacker’s ‘career’, as they’re learning and leaving a trail of mistakes,” says Ollmann. “Later, they will make fewer mistakes, but actions can be tracked back to their historical use of identities and errors. The best attribution intelligence sources lie in past Domain Name System (DNS), web logs, emails and message boards which can be correlated to present-day events.”
The same applies to domain name use, Ollmann says: “Even if a hacker registers a domain with false information, some of that WHOIS data may be repeated from past attacks and configurations. Once registered, having access to packets from the system that first configures the nameserver settings for the DNS provides insight. Even more so, the first DNS ‘lookup’ of the domain is almost assuredly by the hacker as they confirm their settings.” These network attributes provide a unique ‘fingerprint’, and often constitute ‘mistakes’ if that data is available for TI investigation, Ollmann explains.
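The WHOIS reuse Ollmann describes can be checked mechanically: compare a new registration’s attributes against historical records and flag overlaps. A minimal sketch follows, using invented domains and fields (real WHOIS records carry many more attributes, and registrant data is now often redacted):

```python
# Hypothetical historical registration records tied to past attacks.
past_registrations = [
    {"domain": "malicious-old.example", "email": "reg@mailbox.example",
     "nameserver": "ns1.cheapdns.example"},
    {"domain": "phish-kit.example", "email": "other@mailbox.example",
     "nameserver": "ns9.bullet.example"},
]

new_registration = {"domain": "fresh-c2.example",
                    "email": "reg@mailbox.example",
                    "nameserver": "ns1.cheapdns.example"}

def shared_attributes(new, old):
    # Flag any registration attribute repeated from a past record.
    return {k for k in ("email", "nameserver") if new.get(k) == old.get(k)}

for old in past_registrations:
    overlap = shared_attributes(new_registration, old)
    if overlap:
        print(old["domain"], sorted(overlap))
```

A hit on even one reused attribute is exactly the kind of ‘mistake’ Ollmann says past DNS and WHOIS data can surface.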
Carl Leonard, principal security analyst at Forcepoint, says that “although skillsets can be determined, it’s important to remember that purchase of kits can make hackers appear more or less skilled than they may be. A skilled attacker can use a kit, leading you to assume they have nothing further up their sleeve, and then launch a more sophisticated attack. A less skilled attacker can use a kit leading you to assume they are operating at a certain level – but ultimately this skill has been provided by the kit.”
Hackers sponsored by nation states present different challenges for TI, but also have potential to yield useful intelligence about the nature of the threat. The discovery that a company – even a start-up or SME – is targeted by a major foreign state can prove revelatory, and remind the targeted party that the globalised internet means anybody with valuable data assets will be found. National identities can often be inferred from tooling and methodology.
“Nation states do typically show consistency in what they develop and how they operate. Different countries have their own signatures based on targeting, tools used, and methodologies employed,” reports Couch. “China has tended to be more open in how it attacks others, and its methodology tends to be fairly consistent. Russia has modified typical cybercrime tools that have come out of Eastern Europe. The US tends to have highly customised capabilities. Iran has leveraged DDoS and social media in its attacks.”
“State-sponsored hackers play in a totally different league,” says Adrian Liviu Arsene, senior e-threat analyst at Bitdefender. “While in recent years we would see one or two state-sponsored attacks a year, such attacks now surface at much higher frequency.” Their mode of operation “often resembles organisational models specific to military or intelligence agencies – they have research teams working alongside ‘DevOps’ [collaborative teams of software developers and IT professionals] and people who specialise in crunching exfiltrated data into actionable intelligence,” Arsene adds.
“TI-led practices will continue to improve over time,” predicts Stephen Gates, chief research intelligence analyst at NSFocus IB. “An understanding of threat actor goals, conditions for exploitation, variations of threat, activities attracting threat, outcomes of a successful threat, vulnerability indicators, and defences against threat – all these are what strategic TI is designed to provide.”
Ollmann believes it will become more feasible to construct a composite picture of a hacker’s probable identity, motivations, skillset and socio-economic background based on intelligence-led monitoring: “Innovations in Big Data analytics and machine learning are making this possible. University research groups have been established to focus on this problem, such as the US Department of Defense’s $17.3m grant to Georgia Institute of Technology to develop a ‘science of cyber attribution’. These automated approaches are ideal for processing network data and binary artefacts, but omit the human intelligence factor needed for prosecution.”
“Future TI-led cyber-security practices will be powered by automated systems based on machine learning and AI algorithms,” agrees Liviu Arsene, “because companies need to quickly identify and respond to threats before they cause damage.”
“While attribution isn’t a priority for all organisations, it has led to insight – sometimes new or just greater insight – into the adversary ‘supply chain’: what threat sources are developing and how, marketplaces where they exchange these capabilities on the Dark Web and vetted forums, and the motivations that drive them,” says Couch. “This awareness and knowledge allows us to exploit the hackers’ human side and predict where attacks may go – and play on their insecurities.”
The closest face-to-face encounter most security professionals will have with a hacker is the malicious insider: employees who have contact with criminals or other threat actors, and supply them with information to access enterprise systems – usually to exfiltrate data for resale – in return for money, other favours, or because of blackmail. This represents a different threat from insiders who use access privileges to steal information. As techniques to investigate data theft mature, acting as a malicious insider becomes riskier.
“Plenty of evidence shows that insiders are recruited by hackers to help them get in,” says Jason Steer, solutions architect at Menlo Security. “IT staff hold huge amounts of trust that can be abused – that’s the nature of having keys to the kingdom.”
Technical staff are often “a huge target for attackers” as they have admin accounts and access to systems far beyond that of a standard user, adds James Maude, senior security engineer at Avecto. “Assuming technical users are invulnerable is a huge, but common, mistake.”
Insider threats come in many forms, but with respect to malicious insiders working for their own gain, or as criminal confederates, the threat is rising: 62 per cent of respondents to Watchful Software’s 2015 Insider Threat Spotlight Report said insider threats have become more frequent, with evidence that data-rich vertical sectors, such as healthcare and telecommunications, are most targeted.
“It is entirely possible that an insider could be persuaded to go ‘rogue’, and work on behalf of a malicious external force,” reports social engineering expert Jenny Radcliffe. “Extortion, greed, malice, or various financial factors, can motivate someone to act.”
A 2016 Kaspersky Lab report found that cybercriminals use insiders to gain access to telecoms networks and subscriber data, recruiting disaffected staff through underground channels, or blackmailing employees using compromising information gathered from open sources. These insiders are paid for their cooperation and may also be asked to identify co-workers who could turn rogue. Willing insiders are also found through Dark Net message boards or so-called ‘black recruiters’.
“There have been numerous cases where this has occurred,” believes McElroy. “Money and blackmail can be quite effective as motivators. This isn’t just about cybercriminals: nation states do this all the time. Like any other crime, as the economy does poorly, the rates rise. There are people paying rent from ransomware profits.”
According to Gates, in a deflated economy, “the possibility remains somewhat high that low-wage, disgruntled employees could be conscripted by promises of large sums of money – especially when they suspect their job will be eliminated, outsourced, or relocated.”