4.2 Analysis Techniques
The analysis techniques used in audits combine different methods applied to the project codebase along with any accompanying specification and documentation. Many of them are automated analyses performed with tools, with some level of manual assistance, and they generally fall into eight broad categories:
1. Specification analysis: manual
2. Documentation analysis: manual
3. Software testing: automated
4. Static analysis: automated
5. Fuzzing: automated
6. Symbolic checking: automated
7. Formal verification: automated, with some level of manual assistance
8. Manual analysis: entirely manual
Let's discuss each of these categories in some detail.
Static Analysis
Static analysis is a technique for analyzing program properties without actually executing the program. This contrasts with software testing, where programs are actually executed with different inputs to examine their behavior.

With smart contracts, static analysis can be performed on the Solidity code directly or on the EVM bytecode, and it is usually a combination of control-flow and data-flow analysis.

Some of the most widely used static analysis tools for smart contracts are Slither (a static analysis tool from Trail of Bits) and Maru (a static analysis tool from ConsenSys Diligence), both of which analyze intermediate representations derived from the Solidity code of smart contracts.
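As a rough illustration of what a static analyzer looks at, consider the hypothetical vault below. Its withdraw() function makes an external call before updating state, a classic reentrancy shape that control-flow and data-flow analysis can flag without ever executing the code; Slither, for instance, ships a detector for this general pattern. The contract and its names are invented for this sketch.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical vault used only to show what static analysis inspects:
// the order of operations in withdraw(), not its runtime behavior.
contract UnsafeVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        // The external call happens before the state update below. A static
        // analyzer sees this ordering in the control/data-flow graph and
        // reports a potential reentrancy issue.
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0;
    }
}
```

Running a tool such as Slither over a project containing this contract would typically surface a reentrancy finding on withdraw() without any test inputs being supplied.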
Fuzzing
Fuzzing (or fuzz testing) is an automated software testing technique that involves providing invalid, unexpected or random data as inputs to software. This again contrasts with software testing in general, where chosen and valid data is used for testing. The program is then monitored for exceptions such as crashes, failing built-in code assertions or potential memory leaks.

Fuzzing is especially relevant to smart contracts because anyone can interact with them on the blockchain by providing random inputs, without necessarily having a valid reason for doing so or any expectation from such an interaction. This is in the context of the arbitrary Byzantine fault behavior we have discussed multiple times earlier. Widely used fuzzing tools for smart contracts are Echidna from Trail of Bits, Harvey from ConsenSys Diligence and, most recently, Foundry's fuzz testing.
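To make this concrete, here is a minimal sketch of a Foundry-style fuzz test. The SimpleToken contract and the test names are hypothetical; the point is that the test function takes parameters, Foundry generates many random values for them, and the assertion encodes a property that should hold for every generated input.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Test} from "forge-std/Test.sol";

// Hypothetical fixed-supply token, used only to give the fuzzer a target.
contract SimpleToken {
    mapping(address => uint256) public balanceOf;
    uint256 public totalSupply;

    constructor(uint256 supply) {
        totalSupply = supply;
        balanceOf[msg.sender] = supply;
    }

    function transfer(address to, uint256 amount) external {
        balanceOf[msg.sender] -= amount; // reverts on underflow in 0.8+
        balanceOf[to] += amount;
    }
}

contract SimpleTokenFuzzTest is Test {
    SimpleToken token;

    function setUp() public {
        token = new SimpleToken(1_000_000 ether);
    }

    // Foundry calls this with many randomly generated (to, amount) pairs.
    function testFuzz_transferPreservesTotalSupply(address to, uint256 amount) public {
        vm.assume(to != address(0) && to != address(this));
        amount = bound(amount, 0, token.balanceOf(address(this)));

        token.transfer(to, amount);

        // Property: transfers move balances around but never create or
        // destroy tokens.
        assertEq(token.balanceOf(address(this)) + token.balanceOf(to), 1_000_000 ether);
    }
}
```

Echidna follows a similar philosophy but expresses properties as Boolean functions (conventionally prefixed echidna_) on the contract under test.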
Symbolic Checking
Symbolic checking is a technique for checking program correctness by using symbolic inputs to represent sets of states and transitions, instead of using real inputs and enumerating all the individual states or transitions separately. The related concept of model checking (or property checking) is a technique for checking whether a finite-state model of a system meets a given specification; to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language.

The problem itself is formulated as a task in logic, with the goal of solving that formula. There are decades of research and development in this domain, and I would encourage anyone interested to explore the many references available. For smart contracts, Manticore from Trail of Bits and Mythril from ConsenSys Diligence are two widely used symbolic checkers, which we will touch upon later.
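A sketch of why this matters in practice: in the hypothetical contract below, the assertion fails for exactly one input out of 2**256, which random fuzzing will essentially never find. A symbolic checker such as Mythril or Manticore instead treats x as a symbolic value, collects the branch conditions along each path, and asks an SMT solver for a concrete witness that reaches the failing assertion.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical example for illustration only.
contract SymbolicExample {
    // Intended property: doubling a nonzero value never yields zero.
    // It fails for exactly one input (x == 2**255), which wraps to zero
    // under unchecked arithmetic.
    function double(uint256 x) external pure returns (uint256 y) {
        unchecked {
            y = x * 2; // wraps on overflow because of the unchecked block
        }
        if (x != 0) {
            // A symbolic checker builds the path condition
            // (x != 0 && x * 2 == 0) and solves it to report this
            // reachable assertion violation together with the input.
            assert(y != 0);
        }
    }
}
```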
Formal Verification
Formal verification is the act of proving or disproving the correctness of the algorithms underlying a system with respect to a certain formal specification of a property, using formal methods of mathematics.

Formal verification is effective at detecting complex bugs, which are generally hard to detect manually or using simpler automated tools. Formal verification needs a specification of the program being verified and techniques to compare the specification with the actual implementation. Some of the tools in this space are Certora's Prover and ChainSecurity's VerX. KEVM from Runtime Verification is a formal verification framework that models EVM semantics.
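As a small sketch of what a machine-checkable specification can look like at the code level: the tools above each have their own specification languages (Certora's Prover uses CVL, for example), so the plain-Solidity assertion below is only an approximation. The idea is that a prover attempts to show the asserted property holds for every possible input and reachable state, rather than for sampled inputs; even Solidity's built-in SMTChecker, enabled through the compiler's model-checker settings, can discharge simple properties of this form.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical counter whose specification is written inline as an assertion.
contract Counter {
    uint256 private count;

    function increment() external {
        require(count < type(uint256).max, "counter is full");
        count += 1;
        // Specification: after any successful increment, the counter is
        // strictly positive. A formal verifier proves this for all states
        // satisfying the precondition, not just for tested ones.
        assert(count > 0);
    }

    function current() external view returns (uint256) {
        return count;
    }
}
```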
Manual Analysis
Manual analysis is complementary to automated analysis using tools and serves a critical need in smart contract audits. Today, automated analysis using tools is cheap because it typically uses open-source software that is free to use. Automated analysis is also fast, deterministic and scalable; however, it is only as good as the properties it is made aware of, which are typically limited to Solidity and EVM related constraints.

Manual analysis, on the other hand, is expensive, slow, non-deterministic and not scalable, because human expertise in smart contract security is a rare and expensive skill set today, and we are slower, more prone to error and also inconsistent. Manual analysis, however, is the only way today to infer and evaluate business logic and application-level constraints, which is where a majority of the serious vulnerabilities are being found.
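A hypothetical illustration of the kind of issue only manual analysis catches: the contract below compiles cleanly, uses no dangerous constructs, and violates no generic Solidity or EVM property, so automated tools have nothing to report. Only a reviewer who has read the project's specification (say, "the protocol fee is 0.3%, i.e. 30 basis points") can see that the constant actually charges 3%.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical fee module; the flaw is purely in the business logic.
contract FeeCollector {
    // The spec says 30 basis points (0.3%); the code charges 300 (3%).
    uint256 public constant FEE_BPS = 300;

    function feeFor(uint256 amount) public pure returns (uint256) {
        return (amount * FEE_BPS) / 10_000;
    }
}
```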