In our previous post about Exposure Assessment, we described how we outline a target’s infrastructure using SATAYO, our Cyber Threat Intelligence (CTI) platform. In other words, we collected the identifiers of all the target’s machines, i.e., their host names and IP addresses. Now it’s time to understand which machines could allow an attacker to gain access to the target’s systems, by identifying vulnerabilities in the infrastructure. Let’s find out how to do it!
First of all, we need to identify the open ports on the target machines. Knowing which ports are open gives us an initial idea of which services might be hosted on a machine. Furthermore, some ports are more interesting than others: in some cases they may provide direct access to sensitive information or to a command line, which means total control of the machine. Examples are the File Transfer Protocol (FTP) on port 21 and the Remote Desktop Protocol (RDP) on port 3389.
SATAYO collects open ports in two ways. The first involves open source search engines dedicated to providing information about Internet-connected devices. The other requires the use of tools specifically designed to perform port scanning. A port scan consists of trying to connect to all or some of a machine’s ports, then waiting for and interpreting any response. The former approach is more in line with OSINT framework guidelines, but the latter may be more comprehensive.
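To make the idea concrete, here is a minimal TCP “connect” scan sketched in Python. The function names are our own and this is not SATAYO’s actual tooling (which typically relies on dedicated scanners); the demo targets a listener we create ourselves, so it runs without touching any external host:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a full TCP connection to each port; return the list of open ones."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demo: bind a local listener so the scan has a known open port to find
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [open_port])
listener.close()
```

A real assessment would feed this loop the well-known ports mentioned above (21 for FTP, 3389 for RDP, and so on) and would usually prefer a mature scanner over hand-rolled sockets.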
The most commonly open ports are those for HTTP (80) and HTTPS (443), which usually expose websites or web applications. When identified as open, they are tested to determine which HTTP methods they accept. If the web application doesn’t correctly handle all the accepted methods, unintended flows can arise that may lead to the machine being compromised.
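One common way to enumerate accepted methods is an OPTIONS request, reading the `Allow` header from the response. The sketch below (our own illustration, not SATAYO’s implementation) spins up a throwaway local server that carelessly advertises dangerous methods, then flags them:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Methods that often deserve a closer look when a server accepts them
RISKY_METHODS = {"PUT", "DELETE", "TRACE", "CONNECT"}

def allowed_methods(host, port, path="/"):
    """Send an OPTIONS request and return the methods listed in Allow."""
    conn = http.client.HTTPConnection(host, port, timeout=2)
    conn.request("OPTIONS", path)
    resp = conn.getresponse()
    allow = resp.getheader("Allow", "")
    conn.close()
    return {m.strip() for m in allow.split(",") if m.strip()}

# Demo server that advertises more methods than it should
class Handler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        self.send_response(200)
        self.send_header("Allow", "GET, POST, PUT, DELETE, TRACE")
        self.end_headers()
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

methods = allowed_methods("127.0.0.1", server.server_port)
risky = methods & RISKY_METHODS
server.shutdown()
```

Note that the `Allow` header only reflects what the server claims; a thorough test also probes each method directly, since servers sometimes accept methods they don’t advertise.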
Moreover, SATAYO is able to detect the exposed software, along with its version, thanks to a set of integrated fingerprinting tools. These are usually able to identify the technologies used to develop the hosted website or web application.
SATAYO also enriches this data with screenshots of the web pages and other insights about the website based on its response headers. Likewise, it collects previous versions of the web pages archived on the Internet, which can reveal useful information that is no longer present in the latest version of the page.
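Much of this header-based fingerprinting boils down to spotting `product/version` tokens in fields like `Server` or `X-Powered-By`. A simplified sketch (the field list and regex are our own assumptions, not SATAYO’s logic):

```python
import re

def fingerprint(headers):
    """Extract (header, product, version) hints from typical response headers."""
    hints = []
    for field in ("Server", "X-Powered-By", "X-AspNet-Version"):
        value = headers.get(field)
        if not value:
            continue
        # Match "product/version" tokens such as "nginx/1.18.0"
        for product, version in re.findall(r"([A-Za-z-]+)/([\d.]+)", value):
            hints.append((field, product, version))
    return hints

# Example response headers as a web server might send them
sample = {
    "Server": "Apache/2.4.41 (Ubuntu)",
    "X-Powered-By": "PHP/7.4.3",
}
hints = fingerprint(sample)
```

Hardened servers often strip or falsify these headers, which is why dedicated fingerprinting tools also inspect page content, cookies, and JavaScript assets.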
The first test is performed on the HTTPS ports to verify the strength of the TLS connection. In particular, expired certificates or the use of weak encryption or key-exchange algorithms may expose the connection to Man-in-the-Middle (MitM) attacks.
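The certificate-expiry part of such a check is straightforward with Python’s `ssl` module: in a live check you would read the `notAfter` field from `SSLSocket.getpeercert()` and compare it against the current time. The sketch below uses hard-coded dates so it runs offline; the function name is our own:

```python
import ssl

def cert_expired(not_after, now):
    """Check a certificate's 'notAfter' timestamp (the string format
    returned by ssl.SSLSocket.getpeercert()) against a reference time."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return expiry < now

# Reference point: Jan 1 2025 (in a live check, use time.time() instead)
now = ssl.cert_time_to_seconds("Jan  1 00:00:00 2025 GMT")

expired = cert_expired("Jun  1 00:00:00 2020 GMT", now)  # long past
valid = cert_expired("Jun  1 00:00:00 2030 GMT", now)    # still valid
```

Checking for weak protocol versions works similarly: attempt a handshake with `SSLContext.maximum_version` pinned to an old `ssl.TLSVersion` value and see whether the server accepts it.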
Finally, knowing the technologies and software versions used to build a web page makes it possible to search for any associated vulnerabilities (this can also be done manually). However, not all vulnerabilities are exploitable, and version-based matching can produce false positives: the version number doesn’t always change when a patch is applied to fix a vulnerability.
To conclude, it’s possible to learn more about a vulnerability on the NIST National Vulnerability Database (NVD). This is useful to enrich the findings with, for example, their CVSS score and vector, in order to classify how dangerous each vulnerability is. It’s also worth checking whether a public exploit for the vulnerability already exists.
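The classification step is well defined: the CVSS v3.x specification maps base scores to qualitative severity ratings in fixed bands, which can be expressed directly in code:

```python
def cvss3_severity(score):
    """Map a CVSS v3.x base score to its qualitative severity rating,
    using the bands defined in the CVSS v3.x specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"  # 9.0 - 10.0
```

So a finding scored 9.8 (for example, a typical unauthenticated remote code execution) is rated “Critical”, while a 5.3 information disclosure lands in “Medium”.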
This is where an Exposure Assessment has to stop: the “fun” part of actually testing exploits belongs to a different service, the Penetration Test. Nevertheless, our work is far from finished. In our next article we’ll take a look at all the possible kinds of domains that can be correlated with the client’s domain. In the worst cases, they are among the most critical threats to a business. Do you already know why?
Did you learn something from this article? Perhaps you’re already familiar with some of the techniques above? If you find security issues interesting, maybe you could start in a cybersecurity or similar position here at Würth Phoenix.