
Why spend money and resources on security? Why bother with a Security Development Lifecycle (SDL)? Why integrate fuzzing into the development process? Why fill your head with knowledge of various fuzzers like AFL, libFuzzer, and the rest? After all, you could "simply" turn the search for vulnerabilities in your products into continuous torment and arrange a "sweet" life for researchers and attackers. Want to know how? Then read on!
Disclaimer: this article should be taken with a certain amount of humor and irony!

Recently, more and more work has been devoted to the topic of AntiFuzzing. AntiFuzzing is any action that reduces the effectiveness of fuzzing at finding vulnerabilities in a developer's products.
The article focuses on fuzzing binary applications written in C/C++ that can be run locally, with the goal of finding memory-corruption vulnerabilities in them.
Today, most AntiFuzzing efforts are aimed at AFL, as the most prominent, well-known, and proven representative of the feedback-based fuzzing approach.
After examining the problem, we identified the following AntiFuzzing techniques:
- Jamming the fuzzer's results — the most eccentric technique, which some developers adopt without even realizing it). The idea: to make the program "safer", you need to add more bugs to it... Unfortunately, we cannot answer how many of our own bugs are in the program or how dangerous they are, but we can dilute them with a pile of errors that are useless to an attacker!
- Detecting the fuzzing process — this comes straight from the jailbreak_detect and root_detect realm. The application determines by itself (via a series of checks written by the developer) that it is not simply running but being fuzzed, and as a result refuses to work. The information security industry has done this a million times. Such code is found and stripped out of the application quite easily, so this technique leads the ranking of "most useless and unsophisticated".
- Slowing down the fuzzing process — inside our company we call such things "hiding bugs in overhead". Even today some software performs badly even outside of fuzzing, so finding vulnerabilities in it becomes a psychologically taxing task for researchers)
- Creating blind spots — the most interesting direction, which, in our opinion, will drive the evolution of fuzzers. The work presented at BlackHat 2018 raises the problem of collisions in the shared-memory coverage map that AFL uses to determine which code regions are covered. In effect, regions are created that the fuzzer cannot observe during its run.
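To make the first technique concrete, here is a minimal sketch in the spirit of the Chaff Bugs paper: an out-of-bounds write that lands only in dead bytes. The `struct record` layout, field names, and the missing bounds check are all hypothetical; the point is that a sanitizer or fuzzer still flags a memory-safety violation, but the corrupted bytes are never read, so the "bug" only wastes the attacker's triage time.

```c
#include <string.h>

/* Hypothetical chaff bug: the overflow spills only into dead_padding,
 * which no code path ever reads, so the defect is non-exploitable. */
struct record {
    char name[8];
    char dead_padding[8];   /* never read anywhere in the program */
};

/* Deliberately missing bounds check: len up to 16 overflows name
 * into dead_padding. A fuzzer reports it; an attacker gains nothing. */
static void set_name(struct record *r, const char *src, size_t len)
{
    memcpy(r->name, src, len);
}
```

A real chaff-bug injector must additionally prove the overwritten bytes are truly dead, which is the hard part the paper automates.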
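The second technique, fuzzer detection, can be as crude as checking for AFL's own environment: AFL passes the id of its coverage shared-memory segment to the target in the `__AFL_SHM_ID` environment variable. A sketch of such a check (and exactly the kind of code that is trivially found and patched out):

```c
#include <stdlib.h>

/* Crude fuzzer detection: AFL exports the coverage shared-memory id
 * to its target via the __AFL_SHM_ID environment variable. If it is
 * present, assume we are running under AFL. */
static int being_fuzzed(void)
{
    return getenv("__AFL_SHM_ID") != NULL;
}
```

An application would call `being_fuzzed()` early and refuse to run, exactly like jailbreak/root detection — and it is defeated just as easily, by unsetting the variable handling or NOP-ing out the branch.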
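The third technique, slowdown, exploits the fact that a fuzzer generates mostly invalid inputs: if the error path is artificially slow, executions per second collapse while legitimate well-formed inputs stay fast. A minimal sketch (the `"IMG1"` magic and the 250 ms penalty are made-up illustration values):

```c
#include <string.h>
#include <unistd.h>

/* Hypothetical "hide bugs in overhead" check: malformed inputs pay
 * an artificial delay, so a fuzzer hammering the parser with junk
 * slows to a crawl, while valid inputs take the fast path. */
static int check_magic(const unsigned char *buf, size_t len)
{
    if (len < 4 || memcmp(buf, "IMG1", 4) != 0) {
        usleep(250 * 1000);   /* 250 ms penalty on the failure path */
        return -1;
    }
    return 0;                 /* valid header, full speed */
}
```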
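The blind-spot technique rests on how AFL records coverage: each basic block gets a compile-time random id, and the edge A→B is recorded at index `id(B) ^ (id(A) >> 1)` in a shared-memory map of only 64 KiB. Distinct edges can therefore collide on the same byte, and AFL cannot tell them apart. A sketch of the hashing (the concrete colliding ids below are just an illustration):

```c
#include <stdint.h>

#define MAP_SIZE 65536  /* AFL's default coverage map size */

/* AFL-style edge hashing: the edge prev->cur is mapped to a single
 * byte in the coverage map. Two different edges hitting the same
 * index are indistinguishable to the fuzzer -- a "blind spot". */
static uint32_t edge_index(uint32_t prev_id, uint32_t cur_id)
{
    return (cur_id ^ (prev_id >> 1)) % MAP_SIZE;
}
```

For example, the edges 2→5 and 0→4 both map to index 4, so new coverage behind one of them can be masked by the other — which is what the BlackHat 2018 work deliberately engineers at scale.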
Thus, AntiFuzzing has both obvious advantages and disadvantages:
- "-" It is possible that the developers of software who are not well versed in some aspects of information security and the fuzzing process are clouded.
- "+" Evolution of fuzzers, which in the future will begin to overcome the implemented AntiFuzzing mechanisms and will provide greater coverage first, if there are embedded AntiFuzzing mechanisms; secondly, when there are elements in the software that simulate AntiFuzzing functions.
Why is using this approach for security stupid and harmful? Developing a high-quality AntiFuzzing approach and applying it to real software is comparable in complexity to developing the coverage-increasing feedback algorithms of the fuzzers themselves. The difficulty is that, besides inserting the anti-fuzzing constructs in the right places, you must make sure they have no clear pattern that can be detected and then simply removed. And AntiFuzzing does not increase the security of the application itself... It is good that, for now, AntiFuzzing research happens only in academia. Meanwhile, there are companies that, on the contrary, focus on making bug hunting easier. For example, Mozilla provides a special build of its browser for this purpose:
blog.mozilla.org/security/2018/07/19/introducing-the-asan-nightly-project

The surge of interest in AntiFuzzing was caused primarily by the DARPA Cyber Grand Challenge 2016 — a competition where computers, without human help, searched for vulnerabilities, exploited them, and patched them. At the heart of most of the search engines, as you might have guessed, was the AFL fuzzer, and all the targets in the competition were binary applications. So all of this can be aimed at countering automated systems rather than people.
This article is based on the works that you can study on your own:
- "Escaping the Fuzz, Evaluating the Fuzzing Techniques and Fooling them with AntiFuzzing" , Master's thesis in Computer Systems and Networks
2016 - Chaff Bugs: Deterring Attackers by Making Software Buggier , 2018
- "AFL's Blindspot and Resist AFL Fuzzing for Arbitrary ELF Binaries" , BlackHat USA 2018
- We also recommend the NCC Group article "Introduction to Anti-Fuzzing: A Defense in Depth Aid" from 2014 (the first AFL release had just appeared and had not yet won the community's love, and the DARPA CGC final was still two years away).
P.S.: We often work with AFL (plus libFuzzer) and their modifications when researching software and implementing SDL for our clients. So in one of the following articles we will talk in more detail about fuzzing with AFL: why more and more people use it when testing programs, and how it raises the security level of development.