Easy Hack: Hacking Secrets of Simple Things

Intercept HTTPS traffic from an application that cannot use a proxy
Solution: It is often necessary to intercept data, whether for research or during a real attack. And besides just looking at the data, it is desirable to be able to change something in the stream. While this is mostly straightforward for plain protocols, wrapping them in SSL (which happens more and more often these days) gives us a problem.
Warning!
All information is provided for informational purposes only. Neither the editors nor the author are responsible for any possible harm caused by the materials of this article.
Okay, if we are talking about analyzing traffic from a browser, there is no problem: any intercepting proxy (a la Burp or ZAP) will handle it with ease. But what if the client application being "broken" does not know how to use a proxy? And what if it speaks some non-HTTP protocol? The second question is covered in the next task (they are separated only to make searching easier later); the first is answered below.
I would like to note that, conceptually, the "attack" on SSL is the same for any of the tools. It is important to understand that nobody is trying to decrypt the transmitted data. The main thing is to bypass the validation of the connection endpoint, which happens in the first stages of the connection, that is, to make the client believe that our service is the place it wanted to connect to. Signatures, certificates and all that. But let's not go too deep and get to the point.
There is a client application that works via HTTPS but cannot use a proxy. We can use the same Burp, or any other proxy able to work in transparent (invisible) mode.
Let me explain the difference. Let's start simple, with HTTP, and then move on to HTTPS. For normal HTTP traffic, when a proxy is used, the software (a la a browser) sends a request like this:
GET http://any_host.com/url?query=string HTTP/1.1
Host: any_host.com
When there is no proxy, the browser sends:

GET /url?query=string HTTP/1.1
Host: any_host.com
As you can see, with a (normal) proxy the request target (the part after GET) contains the hostname. From that name the proxy understands where to connect and forward the request.
If the application does not support working through a proxy, we have two problems: how to make it connect to us, and how our proxy can figure out where to connect onward.
To solve the first task, any method that makes the application's traffic flow through us will do. We can become the gateway for the subnet (ARP poisoning to the rescue), become the server itself (an entry in hosts, DNS spoofing, etc.), or use something more perverse.
The second task is solved by having the proxy take the name of the server to connect to not from the URL but from the Host header of the request itself. This proxy behavior is called transparent (although the term has other meanings too). For example, at work you may browse the Internet without specifying any proxy, while in reality your connections still go through a corporate proxy, with the same access restrictions as for explicitly proxied users.
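The routing logic a transparent proxy needs can be sketched in a few lines of Python. This is a toy, not Burp's actual implementation, and the hostnames are just examples:

```python
# Toy sketch: recovering the destination host from a raw HTTP request.
# A normal proxy gets the host in the request line; a transparent proxy
# must read it from the Host header instead.

def extract_destination(raw_request: bytes) -> str:
    """Return the target hostname for a proxied or proxy-less request."""
    head = raw_request.split(b"\r\n\r\n", 1)[0].decode("ascii", "replace")
    request_line, *header_lines = head.split("\r\n")
    method, target, _version = request_line.split(" ", 2)

    if target.startswith("http://"):             # explicit-proxy style
        return target[len("http://"):].split("/", 1)[0]

    for line in header_lines:                    # transparent-proxy style
        name, _, value = line.partition(":")
        if name.strip().lower() == "host":
            return value.strip()
    raise ValueError("no destination found")

proxied = b"GET http://any_host.com/url?query=string HTTP/1.1\r\nHost: any_host.com\r\n\r\n"
direct  = b"GET /url?query=string HTTP/1.1\r\nHost: any_host.com\r\n\r\n"
print(extract_destination(proxied))  # any_host.com
print(extract_destination(direct))   # any_host.com
```

Both request styles resolve to the same destination, which is exactly what lets a transparent proxy serve proxy-unaware clients.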
So much for HTTP. But what about HTTPS? Here everything is more difficult.
In the last issue I described how a browser works through a proxy when connecting to HTTPS sites. Let me remind you: the browser connects to the proxy using the CONNECT method, specifying the hostname it wants to reach. The proxy, in turn, connects to that host and then simply relays traffic between the browser and the server.
But if the application does not support proxies, how does a transparent proxy know where to connect the client? Under the conditions described, it cannot, at least not automatically.
Still, there is an option if we add one more technology: Server Name Indication (SNI), one of the extensions of the SSL protocol. It is not supported everywhere yet, but the major browsers are already on board. The technology is very simple: the client specifies the name of the server it is connecting to at the very beginning of the SSL handshake (that is, this info is not encrypted).
Thus a transparent proxy again has the ability to automatically proxy data between the client and the servers, based on SNI analysis during connection setup.
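For a feel of how little is involved, here is a toy encoder/decoder for the server_name extension from RFC 6066 in Python. It builds the extension bytes itself instead of capturing a real ClientHello, and the hostname is an example:

```python
import struct

def build_sni_extension(hostname: str) -> bytes:
    """Encode an RFC 6066 server_name extension (extension type 0x0000)."""
    name = hostname.encode("ascii")
    entry = struct.pack("!BH", 0, len(name)) + name       # host_name type = 0
    server_name_list = struct.pack("!H", len(entry)) + entry
    return struct.pack("!HH", 0x0000, len(server_name_list)) + server_name_list

def parse_sni_extension(data: bytes) -> str:
    """Extract the hostname back out of the extension bytes,
    as a transparent proxy would while peeking at the ClientHello."""
    ext_type, _ext_len = struct.unpack("!HH", data[:4])
    assert ext_type == 0x0000, "not a server_name extension"
    name_type, name_len = struct.unpack("!BH", data[6:9])
    assert name_type == 0, "not a host_name entry"
    return data[9:9 + name_len].decode("ascii")

ext = build_sni_extension("mail.example.org")
print(parse_sni_extension(ext))  # mail.example.org
```

Since the extension travels in cleartext before any encryption starts, this parse is all a proxy needs to pick the upstream host and forge a matching certificate.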
Now the general situation is approximately clear. Let's move on to particulars. We will take Burp and its capabilities as a basis, as a typical example.
If the client application does not support proxies then Burp, as you have already understood, can work in invisible proxy mode. You can enable it via the following chain:
Proxy -> Options -> Proxy Listeners -> Proxy Selection -> Edit -> Request Handling -> Support Invisible Proxy.
If the client supports SNI, everything is fine: Burp can connect to the desired host and generate a certificate, either self-signed or signed by Burp's CA. If not, you will have to do some manual work.
First, on the Request Handling tab we must specify the address and port to connect to. Second, under Certificate we specify the hostname for which the certificate will be generated. Alternatively, we can simply use a plain self-signed certificate.
Of course, this data has to come from somewhere. Most likely you will need to see which IP address and port the client application connects to, then connect there directly and take the name from the SSL certificate you receive in order to generate your own.
As you can see, everything is simple, if sometimes tedious. In the end we get pure HTTP traffic in Burp. For details on this Burp feature, see its documentation.
Intercept non-HTTP traffic in SSL
Solution: As you probably noticed, the transparent proxying of SSL traffic described above is, in essence, simple port forwarding: a redirect with certificate substitution. Nothing there is really tied to the protocol that lives inside SSL, so the previous task is, to some extent, a special case of this one.
So, the question left over from the previous task: what to do when a non-HTTP protocol is used inside SSL? IMAPS, FTPS and almost any other similar protocol with an S at the end: everything gets stuffed into SSL. How sweet it would be to strip the SSL away and get at the pure traffic...
The answer is simple: don't use Burp :), use something else. There are many tools that can help here. Some are tailored to a specific protocol, but there are universal ones too, and today I would like to introduce you to one of the universal ones: SSLsplit.
This tool is beautiful in that its main idea is very simple and clear, yet it offers a decent set of specific settings. It is a plain port forwarder ("if traffic arrives at such-and-such a port, forward it all to such-and-such a place"), but with the ability to "work" with SSL. You can slip in a real (stolen) certificate, create a self-signed one, or have one signed by your own CA. It also supports automatic certificate generation based on SNI, or on the destination IP address of redirected traffic (when we pose as the gateway). Plus, it is console based, which makes it easy to automate typical actions. All this makes it far more usable for carrying out attacks (and not just for analyzing application protocols). What to do with the traffic afterwards depends on your needs.
I will not give a full usage manual here, just a small illustrative example.
sslsplit -k ca.key -c ca.crt -l connect.log -L /tmp ssl 0.0.0.0 993 www.example.org 993 tcp 0.0.0.0 143
Here -k and -c give the paths to the private key and certificate of our CA, which can be generated with openssl if necessary. Other options:
- -l - path to the file where the connection log will be kept;
- -L - path to the directory where the logs of all connections will be saved (in plain text).
Next comes the block "ssl 0.0.0.0 993 www.example.org 993". The ssl keyword indicates that we are intercepting SSL and need to substitute the certificate. Then come the interface and port on which SSLsplit will listen, and the final pair is the domain name and port that SSLsplit should connect to.
The "tcp 0.0.0.0 143" block is almost the same, except that we declare that SSL is not used (hence tcp) and give only the SSLsplit listening port. If SSLsplit is "wired in" as the gateway, the connection endpoint can be omitted: it will be taken from the IP headers.
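In essence, the plain tcp mode above is just a TCP splice. A minimal Python sketch of such a forwarder (one-shot, loopback only, no SSL; purely to illustrate the data path, not a substitute for SSLsplit):

```python
import socket
import threading

def _pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until EOF. This loop is the spot where an
    # interceptor could log or rewrite the stream.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_forwarder(target_host: str, target_port: int, listen_port: int = 0) -> int:
    """Start a one-shot forwarder, roughly what the
    'tcp 0.0.0.0 <listen_port> <target_host> <target_port>' spec does.
    Returns the actual listening port (0 picks a free one)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)

    def run() -> None:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        a = threading.Thread(target=_pipe, args=(client, upstream))
        b = threading.Thread(target=_pipe, args=(upstream, client))
        a.start(); b.start(); a.join(); b.join()
        client.close(); upstream.close(); srv.close()

    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()[1]
```

Wrapping `client` and `upstream` in `ssl` contexts, with a forged certificate on the client side, would turn this skeleton into the "ssl" mode.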
Thus we have a simple and universal tool. A description of its usage with examples, and a list of all its features, can be found in its man page.
Set up Nmap for complex conditions
Solution: Nmap is undoubtedly in the top ten of the most needed and most used tools, for pentests as well as for systems monitoring and other things. On the one hand it is simple, smart and fast, which is why everyone loves it. On the other hand, working with it in non-standard conditions brings all kinds of difficulties, largely because it is not entirely clear how it works inside, what its algorithms are.
And the internals of Nmap are not simple at all, even if you take only its core functionality, port scanning. Without going into details: the goal of its creators was a scanner that finds open ports extremely accurately (that is, without false positives or false negatives), works in all kinds of networks (stable or unstable, fast or slow) and adapts to changing network characteristics (for example, when some intermediate network hardware can no longer cope with the load). In other words, Nmap's behavior changes dynamically, per network and per host.
On the other hand, this focus on precision often hurts performance badly. A typical example: scanning hosts on the Internet, some of which sit behind a firewall. To a SYN probe on a closed port, the firewall simply answers nothing instead of sending RST. What's the problem, you might ask? Precisely Nmap's cleverness. Since it does not know the cause (firewall, flaky network, end-host limits), it slows down the rate of requests (down to one per second by default) and increases the wait for a response to each probe (up to ten seconds). Add to that the fact that Nmap rechecks ports up to ten times, and scanning all TCP ports of a firewalled host with a single open port starts taking not hours but days. Yet that one open port is exactly what we need to find.
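The scale of the slowdown is easy to estimate with back-of-envelope arithmetic. The figures below ignore Nmap's internal parallelism, so the real number is lower, but the order of magnitude explains the pain:

```python
# Worst case for a fully firewalled host: every probe times out and
# gets retried. Purely sequential estimate; Nmap keeps many probes
# in flight, which shrinks this, but not enough to make it pleasant.
ports = 65535        # all TCP ports
retries = 10         # rechecks on a "bad" network
max_rtt = 10         # seconds to wait per probe, worst case

hours = ports * retries * max_rtt / 3600
print(f"{hours:.0f} hours fully sequential")          # 1820 hours fully sequential
print(f"{hours / 100:.0f} hours at 100 probes in flight")  # 18 hours at 100 probes in flight
```

Even with generous parallelism the scan still runs for many hours, which is why the timing options below matter.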
But that is not the biggest problem. To improve performance, Nmap scans the network in groups of hosts, not one by one, while the network characteristics are still tracked per host, which is also good. The problem is that a new host group is not started until the previous one has finished. So one host can slow down the scanning of a whole range of hosts... To keep that from happening, it is worth finally figuring out the fine-tuning options of Nmap.
I confess I am not a deep connoisseur of Nmap's algorithms myself (after all, whole books are written on this: Nmap Reference Guide, Nmap Network Scanning), knowledge of which is needed for truly correct timing settings. But this is Easy Hack, so let's concentrate on the basic parameters and practical advice (which works fine in most cases).
So, Nmap has a number of parameters that directly affect performance. I won't list them all, just the ones I liked :)
Before I begin, two remarks. Here and below, "default" means "for the T3 profile", which is what is actually used when no other profile is selected. Values are given in seconds, but you can use other units of time by appending the appropriate letter, for example 3000ms, 5h, 10s.
--initial-rtt-timeout, --max-rtt-timeout. RTT (round-trip time) is the time from sending a packet to receiving the response to it. A very important parameter that Nmap constantly recalculates while scanning, and not in vain: from it, Nmap judges the "performance" of the network and of the end host. In effect it directly controls how long Nmap waits between sending a probe and receiving a response.
And as you understand, when we get no response at all, that seriously skews the RTT. As a consequence, Nmap, sending a SYN packet into a firewall's "black hole", will end up waiting the full max-rtt (even on a fast network) before concluding that the port is filtered. By default, initial is one second and max is ten.
To pick correct RTT values, run Nmap with the --traceroute option (the ping command also works). What we need is the packet round-trip time to the end host, or at least to somewhere "close" to it. It is then recommended to set initial to double the measured value, and max to quadruple it.
--max-retries is a very simple parameter: the maximum number of retries. I already wrote that there can be up to ten of them, but that happens only when Nmap senses a "bad" network; usually it is fewer. So if the network is good, you can cut it in half or more.
--max-scan-delay is the maximum delay between sent packets. The default is one second. If the network is good and the load is nothing to fear (that is, almost always), you can easily reduce it even fivefold.
--host-timeout. The last parameter relates to timing only indirectly, but it is very, very useful: it sets a limit after which a host is simply "skipped" during the scan.
Imagine: we are scanning a class C network and the scan is going quickly, when suddenly, bam! Two or three hosts somewhere in the middle turn out to be heavily firewalled. If you didn't set everything up in advance (as described above), you either wait "forever" or rescan everything again with proper settings.
So, when Nmap hits host-timeout for a particular host, it stops scanning it and moves on to the next ones. Set the value to twenty or thirty minutes, and "firewalled" hosts are no longer scary: once they hit the limit, they are skipped, and you can deal with them later with a manually tuned scan. And yes, by default it is infinite.
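Putting the parameters together, a small helper can compose a tuned invocation from a measured RTT. The concrete numbers follow the rules of thumb above (2x/4x RTT, halved retries, reduced scan delay, a 30-minute host timeout) and are assumptions to adjust, not canon:

```python
def tuned_nmap(target: str, rtt_ms: float) -> str:
    """Compose an nmap command line using the tuning flags discussed above.
    rtt_ms is the round-trip time measured with ping or --traceroute."""
    opts = [
        f"--initial-rtt-timeout {int(rtt_ms * 2)}ms",  # 2x measured RTT
        f"--max-rtt-timeout {int(rtt_ms * 4)}ms",      # 4x measured RTT
        "--max-retries 5",        # half the worst-case ten
        "--max-scan-delay 200ms", # fivefold reduction from one second
        "--host-timeout 30m",     # skip heavily firewalled hosts
    ]
    return "nmap " + " ".join(opts) + " " + target

print(tuned_nmap("192.168.0.0/24", 40))
```

For a 40 ms RTT this yields a command with 80ms/160ms timeouts; rerun the helper whenever the measured RTT changes.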
Of course there are other knobs for tuning Nmap's performance, so if these did not help, consult the documentation. But the main idea is this: it is better to spend a little more time on the initial setup than to wait, or rescan, later.
Install any application on Android
Solution: A most excellent logic bug in Android was fixed recently, and it seems to me a good, typical specimen of a logic vulnerability, so I would like to introduce you to it. Bugs like this are pure joy: you have to use your brain, and you can find them in places everyone else has already picked over.
So, the essence of the problem: any Android application with certain, not very extensive privileges could install ANY other application from the market, with any privileges. That is, a person downloads a little game, and it quietly installs more software on the device and drains all their money through some premium SMS service. A clear profit for the bad guys, which is why Google paid the bug's discoverer two thousand dollars.
Let's take the bug apart. I won't go into deep technical detail but will focus on the basics, so that those unfamiliar with droids can also appreciate the full coolness. It really is quite simple, and it follows the classic pattern: each "piece of the mosaic" is protected on its own, but the whole is a hole.
The mosaic consists of the following:
- In our application we can embed a WebView, that is, a browser, and push our own JavaScript into any page it loads.
- To install an application from the market, a browser is all we need. For example, you can go to Google Play and, once authenticated, install any application onto any of your connected devices.
- With a certain permission (android.permission.USE_CREDENTIALS), an application can ask the device's Account Manager for an authentication token and automatically log the WebView into your account.
Do you feel it? Our application can automatically log into Google and, with controlled JS inside the WebView, completely emulate user behavior! All the anti-CSRF tokens, consent prompts for elevated privileges and so on, we can simply "click through".
Although, in fact, we can reach other Google services the same way. Read the mail, for example :).
The bug has since been fixed. According to its author, the main change was the removal of automatic login: now a prompt is shown asking whether we really want to log in, and nothing happens without the user clicking OK.
That's all. For details, refer to the original write-up; I also recommend putting together your own PoC.
In closing, I would just like to add a word about how deeply Google has penetrated our lives, and I am not even talking about privacy. For many people a Google account is the main one, with almost everything tied to it. Moreover, it is not just a set of critical services but also access to the Chrome browser (and next, the OS?) and to droid devices (what else?). It seems to me this will significantly change typical approaches to security. Take trojans: why dig deep into the system, bypassing OS privilege separation and driver signing, when you can hide in the JS code of some browser component, bypassing nothing, and still control the main "exit"? Though it is clear why :).
Or what protection can one-time codes over SMS give a bank's clients, if a trojan that reads SMS can be pushed onto all of a user's droid devices from a compromised browser on the computer? But that is just musing :).
Thank you for your attention and successful knowledge of the new!
Valery Marchuk
The 25C3 conference (25th Chaos Communication Congress) ended yesterday in Berlin. One of the most talked-about presentations there was the talk by Alexander Sotirov, Marc Stevens and Jacob Appelbaum: "MD5 considered harmful today: Creating a rogue CA certificate". In this article I will briefly describe the essence of the vulnerability and try to answer likely questions.
“We have discovered a vulnerability in the Internet Public Key Infrastructure (PKI) used to issue digital certificates for Web sites. As an example, we demonstrated part of the attack and successfully created a fake CA certificate that is trusted by all modern browsers. The certificate allows us to impersonate any site on the Internet using HTTPS, including banking and online shopping sites.”
The essence of vulnerability
Many CAs still use MD5 hashes when signing certificates, and since 2004 it has been reliably known that MD5 is cryptographically weak. An attacker can create a rogue intermediate certification authority (CA) certificate and use it to sign an arbitrary number of certificates, for example for Web servers, which will be considered trusted via the root certificates, the ones in your browser's "trusted list". Alexander Sotirov, Marc Stevens and Jacob Appelbaum managed to create a fake certificate masquerading as a genuine one from RapidSSL. To generate it, four valid certificates were purchased from RapidSSL, and a cluster of 200 Sony PlayStation 3 consoles was used for the collision computation. The attack is based on finding collisions in MD5 hashes. At the moment it is considered difficult to carry out, but it has been demonstrated in practice.
The researchers collected 30,000 Web server certificates; 9,000 of them were signed with MD5, and 97% of those belonged to RapidSSL.
Vulnerability Impact
An attacker can perform a man-in-the-middle attack, impersonate a trusted host and intercept potentially sensitive data. For the necessary computations, attackers could use a medium-sized botnet and obtain the result in fairly short time.
Vulnerable protocols
The vulnerability applies to all protocols using SSL:
- HTTPS
- SSL VPN
- S-MIME
SSH is not vulnerable to this attack.
Companies issuing vulnerable certificates
- RapidSSL: C=US, O=Equifax Secure Inc., CN=Equifax Secure Global eBusiness CA-1
- FreeSSL (free temporary certificates offered by RapidSSL): C=US, ST=UT, L=Salt Lake City, O=The USERTRUST Network, OU=http://www.usertrust.com, CN=UTN-USERFirst-Network Applications
- TC TrustCenter AG: C=DE, ST=Hamburg, L=Hamburg, O=TC TrustCenter for Security in Data Networks GmbH, OU=TC TrustCenter Class 3 CA/[email protected]
- RSA Data Security: C=US, O=RSA Data Security, Inc., OU=Secure Server Certification Authority
- thawte: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA/[email protected]
- verisign.co.jp: O=VeriSign Trust Network, OU=VeriSign, Inc., OU=VeriSign International Server CA - Class 3, OU=www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign
Attack scenario
An attacker requests a legitimate certificate for a Web site from a commercial certification authority (CA) trusted by browsers, choosing a CA that generates certificate signatures with the MD5 algorithm. Since the request is legitimate, the CA signs the certificate and issues it. In parallel, the attacker crafts a second certificate: a rogue intermediate CA certificate that can itself be used to issue certificates for other sites, constructed so that its MD5 hash collides with that of the legitimate certificate. Since the MD5 hashes of the two certificates, the valid one and the fake one, are identical, the digital signature obtained from the commercial CA can simply be copied into the rogue CA certificate and will remain valid.
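The crux of the attack, that a signature computed over a hash transfers to anything with the same hash, fits in a toy model. Everything below is made up for illustration: HMAC stands in for the CA's RSA signature, and a 2-byte truncated MD5 stands in for a real MD5 collision, which took the PS3 cluster to compute:

```python
import hashlib
import hmac

CA_KEY = b"toy-ca-private-key"   # stand-in for the CA's signing key

def toy_digest(cert: bytes) -> bytes:
    # MD5 truncated to 2 bytes: weak on purpose so a "collision" can be
    # brute-forced in a moment, modeling the real chosen-prefix collision.
    return hashlib.md5(cert).digest()[:2]

def ca_sign(cert: bytes) -> bytes:
    # The CA signs only the digest of the certificate body.
    return hmac.new(CA_KEY, toy_digest(cert), hashlib.sha256).digest()

def ca_verify(cert: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(ca_sign(cert), sig)

benign = b"CN=www.example.org;CA=false"
target = toy_digest(benign)

# Brute-force a rogue "CA=true" certificate body with the same digest.
rogue = next(
    c for i in range(1 << 24)
    if toy_digest(c := b"CN=evil;CA=true;pad=%d" % i) == target
)

signature = ca_sign(benign)          # the CA willingly signs the benign cert
print(ca_verify(rogue, signature))   # True: the copied signature validates
```

The CA never saw the rogue certificate, yet its signature validates it, because the signature only ever covered the (colliding) hash.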
Below is a schematic description of how certificates for Web sites are supposed to work:
- The CA issues its root certificate and distributes it through the browser vendors to the clients. These root certificates are in the "trusted list" on the user's system. This means that all certificates issued by this CA will be trusted by the user by default.
- A company that wants to protect the users of its site acquires a certificate for the Web site from a certificate authority. This certificate is signed by the CA and guarantees the identity of the Web site to the user.
- When a user wishes to visit a secure Web site, the browser requests a certificate from the Web server. If the signature is confirmed by a CA certificate in the list of trusted certificates, the site will be loaded into the browser and data exchange between the site and the browser will be encrypted.
The following describes the attack scenario for impersonating an existing Web site.
- A legitimate certificate for the Web site is purchased from a commercial CA (blue certificate in the diagram)
- A fake CA certificate is generated (black in the diagram). It contains the same signature as the certificate issued for the Web site, so the browser assumes that the certificate was issued by a valid CA.
- Using the fake CA, the attacker creates and signs a new certificate for the Web site (red in the diagram) with a new public key. A copy of the trusted site is created, hosted on a Web server with a fake certificate.
When the user visits the secure site, the browser requests the Web site's certificate. There are various ways an attacker can redirect the user to a specially crafted Web site instead. That site presents the user with the fake site certificate together with the fake CA certificate. The fake site certificate is validated by the fake CA certificate, which in turn validates against the root CA certificate. The browser accepts the chain, and the user notices nothing.
Attack vectors
An attacker can carry out a man-in-the-middle attack and intercept the target user's traffic. Possible attack vectors:
LAN attack:
- Unsafe wireless networks
- ARP spoofing
- Automatic discovery of proxy servers
Remote attack:
- DNS spoofing
- Router Compromise
How dangerous is this vulnerability?
The problem makes it possible to create perfect phishing sites with valid SSL certificates. By choosing a plausible name for the certification authority, an attacker can fool even a professional. Given the ability to mount a man-in-the-middle attack, the attacker can redirect traffic to a specially crafted server and gain access to potentially sensitive data without the user noticing. Site owners using SSL certificates cannot protect their customers here: even if the Web site's certificate is signed with SHA1, the attacker can still use a fake MD5-based certificate.
What are the means of protection?
In fact, there is not much users can do. The problem lies not with browsers or SSL, but with the CAs.
- As a workaround, it is recommended to limit the number of CAs you trust as much as possible and to exclude the CAs listed above from your trusted list.
To make sure that HTTPS traffic is not being intercepted and re-encrypted by antivirus software or a proxy, use one of the methods below:
1. Go to the diagnostics portal, section "Check connection" (https://help.kontur.ru/check). If one of the lines fails the check and the value in the "Certificate" column is highlighted in red, the certificate is being replaced. The name of the program replacing the certificate is shown in the "Publisher" column.
2. Go to https://auth.kontur.ru, left-click the "lock" icon near the address bar (or right-click the page > Properties > Certificates) and check which server certificate is offered. The certificate should be like this:
- Issued to: *.kontur.ru
- Issued By: RapidSSL SHA256 CA
- Serial number: 48 61 59 21 53 c2 cf cd e2 0c f8 ec 70 a1 9d 67
or like this:
- Issued to: *.kontur.ru
- Issued by: GlobalSign Domain Validation CA - SHA256 - G2
- Serial number: 13 0b ab d5 ec ff a6 f0 71 ae 5a 36
If the certificate differs, the HTTPS traffic is being re-encrypted. The name of the program replacing the certificate is shown in the "Issued by" line.
Determine which program issued the certificate and configure it so that it does not interfere with the operation of our services. Instructions for configuring the programs that may replace certificates are given below.
Avast antivirus
Disable active protection for https sites. To do this, right-click on the Avast icon in the notification area and select "Open Avast User Interface".
Go to Menu > Settings.
Select the section Protection > Basic protection components > Web protection.
Uncheck the following items:
- Enable HTTPS scanning;
- Enable script scanning.
ESET Internet Security antivirus
Disable HTTPS protocol filtering in ESET Internet Security. To do this, open ESET > Setup > Advanced options.
Go to Internet and Email > Internet access protection > Web protocols > Module settings, and uncheck "Enable HTTPS protocol check".
Kaspersky Internet Security antivirus
Depending on the version of Kaspersky, some interface elements may differ.
1. Disable the Kaspersky Internet Security add-ons. To do this, in Internet Explorer select Tools > Manage add-ons.
In the "Show" section, select "All add-ons". Find the Kaspersky Protection and Kaspersky Protection Toolbar add-ons in the list, right-click each entry and choose Disable.
2. Disable Kaspersky's script injection. To do this, open Kaspersky Internet Security > Settings > Advanced > Network and uncheck "Inject script into traffic to interact with web pages".
3. Open Kaspersky Internet Security > Settings > Protection > Web Anti-Virus (or Web Protection).
Open Advanced settings and disable "Automatically activate the Kaspersky Protection extension in browsers".
4. Disable checking of HTTPS connections. To do this, open Kaspersky Internet Security > Settings > Advanced settings > Network and, in the "Check secure connections" block, disable the "Check secure connections" option.
Dr.Web anti-virus
Disable HTTPS connection checking. To do this, open Dr.Web > Settings > General > Network > Secure connections and set the "Check encrypted traffic" switch to the "off" position.
AdGuard program
Disable HTTPS connection checking. If you have the AdGuard browser add-on installed, nothing needs to be disabled for it. If it is not installed, open AdGuard > Settings > General settings and uncheck "Filter HTTPS protocol".
AVG Antivirus
Disable HTTPS connection checking. To do this, open AVG Antivirus > Settings > Components > Web Protection (or Online Shield) > Settings and uncheck "Enable HTTPS scanning".