4.6 Audit Process

Let's now talk about the audit process. This is critical to understanding the different stages in the lifecycle of an audit from an auditor's perspective. It helps us understand what auditors do at those different stages, how they focus their efforts, how they interact with each other and with the project team, and what the deliverables are at the different stages of the audit lifecycle.

This process is going to be very different for every audit firm, and perhaps even different across audits by the same firm. Generalizing, an audit can be thought of as a 10-step process:

  1. Reading the specification and documentation of the project to understand the requirements, design and architecture behind all the different aspects of the project.

  2. Running fast automated tools such as linters or static analyzers to investigate common security pitfalls or missing smart contract best practices that we have discussed.

  3. Manually analyzing the code to understand the business logic aspects and detect vulnerabilities in it.

  4. This could be followed by running slower but deeper automated tools such as symbolic checkers, fuzzers or formal verification tools (some of which we have discussed in this chapter). These typically require formulation of properties or constraints beforehand, hand-holding during the analysis and even some post-processing of the results.

  5. These stages may involve auditors discussing with other auditors the findings from all the tools and the manual analysis to identify any false positives or missing analysis.

  6. The auditors may also convey the status to the project team for clarifying any questions on the business logic, threat model or other aspects. All these aspects may be iterated as many times as possible within the duration of the audit, so as to leave some time at the end for writing the report.

  7. Writing the report itself involves summarizing the details on the findings and recommendations.

  8. The audit team delivers that report to the project team.

  9. The two teams discuss the findings, their severities and the potential fixes.

  10. Finally, the audit team evaluates fixes from the project team for any of the findings reported and verifies that those fixes indeed remove the vulnerabilities identified in those findings.

This is what a typical audit process may look like. Let's now dive in and discuss each of these steps in some detail.

Read Specification and Documentation

The first step in the audit process is typically reading the specification and documentation (for projects that have a specification of the design and architecture of their smart contracts). This is indeed the recommended starting point; however, very few new projects have a specification, at least at the audit stage. Some of them have documentation for certain parts.

Remember the differences between the two:

  • Specification starts with the project's technical goals, business goals and requirements. It describes how the project's design and architecture help achieve those goals. The actual implementation of the smart contracts is a functional manifestation of these goals, requirements, specification, design and architecture. Understanding all these is critical in evaluating if the implementation indeed meets the goals and requirements.

  • Documentation on the other hand is a description of what has been implemented based on the design and architectural requirements.

So while the specification answers the "why" aspect of how something needs to be designed, architected and implemented, the documentation answers the "how" aspect: how something has been designed, architected and implemented, without necessarily addressing the "why", leaving it up to the auditors to speculate on the reasons.

Documentation, remember, is typically in the form of README files describing individual contract functionality, combined with functional NatSpec and individual comments within the code itself.

Encouraging projects to provide a detailed specification and documentation saves a lot of time and effort for the auditors in understanding the project's goals and structure, and prevents them from making the same assumptions as the implementation, which is perhaps a leading cause of vulnerabilities.

In the absence of both specification and documentation, auditors are forced to infer those aspects (such as the goals, requirements, design and architecture) by reading the code itself and using tools such as Surya or the Slither printers that we discussed earlier. Identifying the key assets, actors and actions in the application logic from the codebase, which is required for understanding the trust and threat models, is a complex and involved task. All this takes up a lot of time in the absence of a detailed and accurate specification, leaving very little time for the auditors to perform deeper and more complex security analysis.

Fast Tools

Auditors typically also use some fast tools such as linters or static analyzers that perform their analysis and finish running within seconds. Automated tools such as these help investigate common security pitfalls at the Solidity or EVM levels, and detect missing smart contract best practices.

Such tools implement control flow and data flow analysis on smart contracts in the context of their detectors, which encode such common pitfalls and best practices. Evaluating their findings, which are usually available within seconds or a few minutes, is a good starting point to detect common vulnerabilities based on well-known constraints or properties of the Solidity language or the EVM itself.

False positives are possible among some of the detector findings, which need to be verified manually to check whether they are true or false positives. These tools can also miss certain findings, leading to false negatives. The best examples of static analyzers in this space are Slither and Maru, both of which we touched upon earlier in this module.
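
As a concrete illustration, consider the kind of pitfall such detectors encode. The hypothetical vault below updates state after an external call, the classic pattern flagged by reentrancy detectors such as Slither's:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical vulnerable vault, used only to illustrate what fast static
// analyzers flag: state is updated after an external call, which matches
// the reentrancy pattern encoded in their detectors.
contract UnsafeVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        // External call happens before the balance is zeroed below,
        // so a malicious receiver can re-enter withdraw() and drain funds.
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0; // state update too late
    }
}
```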

Manual Analysis

Manual analysis is perhaps the most critical aspect of smart contract audits today. Manual code review is required to understand business logic and detect vulnerabilities in it. Automated analyzers cannot understand application-level logic or infer its constraints, and so are limited to the constraints and properties of the Solidity language or the EVM itself.

Manual analysis of the code is therefore required to detect security-relevant deviations of the implementation from what is captured in the specification or documentation. In the absence of specification or documentation, auditors are forced to infer the business logic and its implied constraints directly from the code itself or from discussions with the project team, and only thereafter evaluate whether those constraints or properties hold in all parts of the codebase.

Auditors have different approaches to manually reviewing smart contracts for vulnerabilities. They may be along the lines of starting with access control, asset flow, control flow, data flow, inferring constraints, understanding dependencies, evaluating assumptions and evaluating security checklists.

Auditors may start with one of these as their preferred approach and then combine several of them for best results. These are very subjective aspects, but we will explore each of them in some detail to understand what they entail.

1 Access Control in Manual Review

Starting with access control is very helpful because access control, as we've discussed, is the most fundamental security primitive. It addresses who has authorized access to what, or which actors have access to what assets.

Although the overall philosophy might be that smart contracts are permissionless, in reality they do indeed have different permissions or roles for different actors (or use them at least during their initial guarded launch). The general classification is that of users and admins (and sometimes even role-based access control).

Privileged roles typically have control over critical configuration and application parameters, including emergency transfers and/or withdrawals of contract funds. Such access control is typically enforced in modifiers, as we have discussed in earlier chapters, and more generally via the visibility of functions (public and external versus internal or private), which was also discussed in the context of Solidity.

Therefore, starting with understanding the access control implemented by the smart contracts and checking whether it has been applied correctly, completely and consistently is a good approach to detecting violations, which could be critical vulnerabilities.
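
A minimal Solidity sketch of what this looks like in code (the contract and its parameters are hypothetical): an admin role enforced by a modifier, and function visibility separating user-facing entry points from internal logic.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contract illustrating the access control surface an auditor
// maps first: who may call what, enforced by a modifier and by visibility.
contract GuardedConfig {
    address public admin;
    uint256 public feeBps;

    constructor() {
        admin = msg.sender;
    }

    modifier onlyAdmin() {
        require(msg.sender == admin, "not admin");
        _;
    }

    // Privileged: only the admin may change this critical parameter.
    function setFeeBps(uint256 newFeeBps) external onlyAdmin {
        feeBps = newFeeBps;
    }

    // Unprivileged entry point, callable by any user.
    function quoteFee(uint256 amount) external view returns (uint256) {
        return _fee(amount);
    }

    // Internal helper: not reachable from outside the contract.
    function _fee(uint256 amount) internal view returns (uint256) {
        return (amount * feeBps) / 10_000;
    }
}
```

The review then checks that every privileged function carries the appropriate modifier, consistently and without exceptions.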

2 Asset Flow in Manual Review

One can also start with asset flow. Assets are Ether, ERC20, ERC721 or other tokens managed by smart contracts. Given that exploits target assets of value, it makes sense to start evaluating the flow of assets into, out of, within and across smart contracts and their dependencies. The questions of "who", "when", "which", "why", "where", "what type" and "how much" are the ones to be asked.

  • For "who", assets should be withdrawn and deposited only by authorized specified addresses as per application logic.

  • for "when", assets should be withdrawn deposited only in authorized specified time windows, or under authorized specified conditions as per application logic.

  • for "which" assets, only those authorized specified types should be withdrawn and deposited.

  • for "why", assets should be withdrawn deposited only for authorized specified reasons as per application logic.

  • for "where", assets should be withdrawn and deposited only to authorized specified addresses as per application logic.

  • for "what type", assets only of authorized specified types should be withdrawn and deposited as per applicatione logic.

  • for "how much", assets only in authorized specified amounts should be allowed to be withdrawn and deposited, again as per the application logic.

So these are all the various aspects of asset flow that need to be evaluated.
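
To make this concrete, here is a hedged sketch (the contract, names and thresholds are all hypothetical) of a withdrawal path annotated with some of these asset-flow questions:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contract annotating a withdrawal path with the asset-flow
// questions above; the checks and limits are illustrative only.
contract AssetFlowSketch {
    mapping(address => uint256) public balances;
    mapping(address => bool) public isAuthorized;  // "who"
    uint256 public withdrawalsOpenAfter;           // "when"
    uint256 public maxWithdrawal;                  // "how much"

    function withdraw(uint256 amount) external {
        require(isAuthorized[msg.sender], "who: unauthorized");           // who
        require(block.timestamp >= withdrawalsOpenAfter, "when: closed"); // when
        require(amount <= maxWithdrawal, "how much: over limit");         // how much
        require(balances[msg.sender] >= amount, "insufficient balance");

        balances[msg.sender] -= amount;
        // "where": funds flow only to the caller's own address.
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }

    receive() external payable {
        balances[msg.sender] += msg.value;
    }
}
```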

3 Control Flow in Manual Review

Evaluating control flow is a fundamental program analysis approach. Control flow analyzes the transfer of control, that is, the execution order across and within smart contracts.

Inter-procedural control flow (a "procedure" is just another name for a function) is typically captured by a call graph, which shows which functions (callers) call which other functions (callees) across or within smart contracts.

Intra-procedural control flow, that is, within a function, is dictated by conditionals (if/else constructs), loops (for/while/do-while, along with continue and break) and return statements.

Both intra- and inter-procedural control flow analysis help track the flow of execution and data in smart contracts, and are therefore a fundamental program analysis approach for evaluating security aspects.
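
The hypothetical contract below shows both kinds of control flow side by side: the settle() to _payOut() call is an inter-procedural edge in the call graph, while the loop and the conditional inside settle() are intra-procedural control flow.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contract used to illustrate control flow analysis.
contract ControlFlowSketch {
    address[] public payees;
    mapping(address => uint256) public owed;

    // Record a debt to a payee, funded by the attached Ether.
    function owe(address payee) external payable {
        if (owed[payee] == 0) {
            payees.push(payee);
        }
        owed[payee] += msg.value;
    }

    function settle() external {
        // Intra-procedural: a loop over all payees...
        for (uint256 i = 0; i < payees.length; i++) {
            address payee = payees[i];
            // ...and a conditional branch within it.
            if (owed[payee] > 0) {
                _payOut(payee); // Inter-procedural: caller -> callee edge.
            }
        }
    }

    function _payOut(address payee) internal {
        uint256 amount = owed[payee];
        owed[payee] = 0;
        (bool ok, ) = payee.call{value: amount}("");
        require(ok, "payout failed");
    }
}
```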

4 Data Flow in Manual Review

Evaluating data flow is another fundamental aspect of program analysis which analyzes the transfer of data across and within smart contracts.

Inter-procedural data flow is evaluated by analyzing the data (the variables and constants used as argument values for function parameters) at call sites and their corresponding return values.

Intra-procedural data flow, on the other hand, is evaluated by analyzing the assignment and use of variables or constants stored in the storage, memory, stack and calldata locations along the control flow paths within functions.

Both intra- and inter-procedural data flow analysis help track the flow of global or local storage and memory changes in smart contracts. Given that data flows where control flows, the two analyses work together in the program analysis of smart contracts to help detect security vulnerabilities.
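
A small hypothetical sketch of the data flows an auditor traces: the argument `amount` flows into `_apply()` at the call site (inter-procedural), and within `_apply()` it flows through local computations into the storage variable `total` (intra-procedural).

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contract used to illustrate data flow analysis.
contract DataFlowSketch {
    uint256 public total;        // storage
    uint256 public feeBps = 50;  // storage

    function add(uint256 amount) external returns (uint256) {
        // Inter-procedural data flow: `amount` is passed as an argument,
        // `credited` is received as the return value.
        uint256 credited = _apply(amount);
        return credited;
    }

    function _apply(uint256 amount) internal returns (uint256) {
        // Intra-procedural data flow: local values feed the storage write.
        uint256 fee = (amount * feeBps) / 10_000;
        uint256 net = amount - fee;
        total += net; // flows into storage
        return net;
    }
}
```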

5 Inferring Constraints in Manual Review

Inferring constraints is an approach that is almost always required. Program constraints are basically rules that should be followed by the program. Solidity-level and EVM-level security constraints are well known because they are part of the language and EVM specifications; however, application-level constraints are rules implicit to the business logic being implemented, which may not be explicitly described in the specification.

An example of such a constraint may be to mint an ERC721 token to an address when it makes a certain deposit of ERC20 tokens to the smart contract, and burn it when it withdraws the earlier deposit. Such business logic specific application level constraints may have to be inferred by auditors while manually analyzing the smart contract code.
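
Here is a hedged sketch of that hypothetical constraint, assuming OpenZeppelin Contracts v4 for the ERC20 interface and ERC721 base, with an illustrative fixed deposit amount. The implied invariant is "one live receipt NFT per outstanding deposit".

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";

// Hypothetical contract encoding the constraint described above: mint an
// ERC721 receipt on an ERC20 deposit, burn it on withdrawal of that deposit.
contract DepositReceipt is ERC721 {
    IERC20 public immutable token;
    uint256 public constant DEPOSIT_AMOUNT = 100e18; // illustrative only
    uint256 private nextId;

    constructor(IERC20 _token) ERC721("Deposit Receipt", "RCPT") {
        token = _token;
    }

    function deposit() external returns (uint256 id) {
        require(
            token.transferFrom(msg.sender, address(this), DEPOSIT_AMOUNT),
            "deposit failed"
        );
        id = ++nextId;
        _mint(msg.sender, id); // constraint: one receipt minted per deposit
    }

    function withdraw(uint256 id) external {
        require(ownerOf(id) == msg.sender, "not receipt holder");
        _burn(id); // constraint: receipt burned when the deposit leaves
        require(token.transfer(msg.sender, DEPOSIT_AMOUNT), "withdraw failed");
    }
}
```

An auditor would then check that no other program path mints or burns receipts without the corresponding token movement.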

Another approach to inferring program constraints, without having to understand the application logic, is to evaluate what is being done on most program paths related to a particular logic and treat that as a constraint. If such a constraint is missing on one or a few program paths, that could be an indicator of a vulnerability (assuming the constraint is security-related), or it could simply mean that those program paths are exceptional conditions where the constraint does not need to hold.

6 Dependencies in Manual Review

Understanding dependencies is another critical approach to manual analysis. Dependencies exist when the correct compilation (or functioning) of program code relies on code (or data) from other smart contracts that were not necessarily developed by the project team.

Explicit program dependencies are captured in the import statements and give rise to the inheritance hierarchy. For example, many projects use the community-developed, audited and time-tested libraries from OpenZeppelin for tokens, access control, proxies, security, etc... Composability in web3 is expected and even encouraged via smart contracts interfacing with other protocols and vice versa, which results in emergent or implicit dependencies on the state and logic of external smart contracts, via oracles for example.

This is of special concern for DeFi protocols that rely on other related protocols for stablecoins, yield generation, borrowing, lending, derivatives, oracles, etc... Assumptions about the functionality and correctness of such dependencies need to be reviewed for potential security impact.
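
Both kinds of dependency appear in the hypothetical sketch below (assuming OpenZeppelin Contracts v4, where Ownable takes no constructor argument): the import is an explicit dependency, while the oracle interface is an implicit one whose state and correctness this contract relies on but does not control.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Explicit dependency: audited OpenZeppelin code pulled into the
// inheritance hierarchy (assumes OpenZeppelin Contracts v4).
import "@openzeppelin/contracts/access/Ownable.sol";

// Implicit dependency: a hypothetical external price oracle.
interface IPriceOracle {
    function latestPrice() external view returns (uint256);
}

contract PricedVault is Ownable {
    IPriceOracle public oracle;

    function setOracle(IPriceOracle newOracle) external onlyOwner {
        oracle = newOracle;
    }

    // Any flaw or manipulation in the oracle propagates here.
    function collateralValue(uint256 amount) external view returns (uint256) {
        return amount * oracle.latestPrice();
    }
}
```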

7 Assumptions in Manual Review

A meta-level approach is that of evaluating assumptions. Many security vulnerabilities result from faulty assumptions, such as who can access what, when, under what conditions, for what reasons, etc... Identifying the assumptions made by the program code and verifying whether they are indeed correct can be the source of many audit findings.

Some common examples of faulty assumptions are: "only admins can call these functions", "initialization functions will only be called once by the contract deployer" (which is relevant for upgradeable contracts), "functions will always be called in a certain order" (as expected by the specification), "parameters can only have non-zero values or values within a certain threshold" (for example, addresses will never be zero), "certain addresses or data values can never be attacker-controlled" or "can never reach program locations where they can be misused" (in program analysis literature this is known as taint analysis), and "function calls will always be successful" (and so checking return values is not required).
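
The hypothetical contract below encodes two of these faulty assumptions in code: that an initialization function will only be called once by the deployer, and that an external call will always succeed.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contract illustrating faulty assumptions an auditor probes.
contract FaultyAssumptions {
    address public admin;
    bool public initialized;

    // Faulty assumption: "only the deployer calls this, and only once".
    // Nothing enforces either, so anyone can (re)initialize and become admin.
    function initialize(address _admin) external {
        admin = _admin;
        initialized = true;
    }

    // The safer version makes the assumption explicit as a check.
    function initializeSafely(address _admin) external {
        require(!initialized, "already initialized");
        admin = _admin;
        initialized = true;
    }

    // Faulty assumption: "the call always succeeds". The return value is
    // ignored, so a failed transfer passes silently.
    function sweep(address payable to) external {
        to.call{value: address(this).balance}("");
    }
}
```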

8 Checklists for Manual Review

The final approach to manual analysis is the one we are using in this bootcamp, which is that of evaluating security checklists. Checklists are lists of itemized points that can be quickly and methodically followed, and can be referenced later by their list number to make sure all listed items have been processed according to the domain of relevance.

To add some context for those who aren't aware of the significance of checklists: this checklist-based approach was made popular by the book The Checklist Manifesto: How to Get Things Right by Atul Gawande, who is a noted surgeon, writer and public health leader.

This idea is best summarized in the review of his book by Malcolm Gladwell, who writes that Gawande begins by making a distinction between errors of ignorance (mistakes we make because we don't know enough) and errors of ineptitude (mistakes we make because we don't make proper use of what we know). Failure in the modern world, he writes, is about the second of these errors, and he walks us through a series of examples from medicine showing how the routine tasks of surgeons have now become so incredibly complicated that mistakes of one kind or another are virtually inevitable. It's just too easy for an otherwise competent doctor to miss a step, or forget to ask a key question, or, in the stress and pressure of the moment, to fail to plan properly for every eventuality.

Gawande then visits pilots and the people who build skyscrapers, and comes back with a solution that experts need: the checklist, literally a written guide that walks them through the key steps in any complex procedure. In the last section of the book, Gawande shows how his research team has taken this idea, developed a safe surgery checklist and applied it around the world with staggering success.

So this glowing review should hopefully motivate a better appreciation for checklists. To apply this to our context, consider the mind-boggling complexities of the fast-evolving Ethereum infrastructure: new platforms, new languages, new tools, new protocols, the risks associated with deploying smart contracts managing billions of dollars... There are so many things to get right with smart contracts that it is easy to miss a few checks, make incorrect assumptions or fail to consider potential situations.

Checklists are known to increase retention and speed up recall. The hypothesis, therefore, is that smart contract security experts need checklists too. Smart contract security checklists, such as the security pitfalls and best practices chapter we discussed earlier in this bootcamp, help in navigating the vast number of key aspects to be remembered, recalled and applied. They help in going over the itemized features, concepts, pitfalls, best practices and examples in a methodical manner without missing any items, and they also help in referencing specific items of interest.

Slow/Deep Tools

In contrast to the fast tools that we discussed earlier, the slow (or deep) tools fall into the categories of fuzzing, symbolic checking and formal verification. Running such deeper automated tools (fuzzers such as Echidna, symbolic checkers such as Manticore, tool suites such as MythX, or formal verification of custom properties with Scribble or the Certora Prover) takes more understanding and preparation time to formulate the custom properties, but enables deeper analysis which may take minutes to run.

They help in discovering edge cases in application-level properties and mathematical errors, among other things. Doing so requires an understanding of the project's application logic, so such tools are recommended to be used at least after an initial manual code review, or sometimes after deeper discussions about the specification and implementation with the project team itself.
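
As an illustration of the property formulation these tools require, here is a hedged sketch of an Echidna-style invariant: Echidna treats parameterless echidna_-prefixed functions returning bool as properties and searches for call sequences that falsify them. The token logic below is hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical token with a capped supply, plus an Echidna-style property.
contract TokenUnderTest {
    uint256 public constant CAP = 1_000_000e18;
    uint256 public totalSupply;
    mapping(address => uint256) public balanceOf;

    function mint(uint256 amount) external {
        require(totalSupply + amount <= CAP, "cap exceeded");
        totalSupply += amount;
        balanceOf[msg.sender] += amount;
    }

    // Invariant for the fuzzer: total supply never exceeds the cap.
    // Echidna reports any call sequence that makes this return false.
    function echidna_supply_below_cap() public view returns (bool) {
        return totalSupply <= CAP;
    }
}
```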

Analyzing the output of these tools also requires significant expertise with the tools themselves, their domain-specific languages and sometimes even their inner workings in order to interpret their findings. Evaluating false positives is sometimes challenging with these tools, but the true positives they discover are typically significant, extreme corner cases missed even by the best manual analysis.

Discussing with other Auditors

Brainstorming with other auditors is often helpful. "Given enough eyeballs, all bugs are shallow" is a premise referred to as Linus's law. This might apply to auditors too, if they brainstorm on the smart contract implementation, its assumptions, findings and vulnerabilities.

While some audit firms encourage active or passive discussion, others take the approach of letting auditors perform their assessments separately, to encourage independent thinking rather than groupthink. The premise is that groupthink might bias the audit to focus only on certain aspects and not others, which may lead to some vulnerabilities going undetected and therefore affect the audit's effectiveness.

A hybrid approach may be interesting, where the auditors initially brainstorm to discuss the project's goals, specification, documentation and implementation, but later pursue their assessments independently. Finally, the auditors come together to compile their findings.

Finding a balance between the overhead of such an approach and the benefits of the overlapping effort may be an interesting consideration.

Discussing with the Project Team

Discussion with the project team is another critical part of the audit process. Having an open communication channel with the project team is useful for understanding the scope, trust and threat models, clarifying any specific concerns, surfacing any assumptions in the specification, documentation or implementation, and discussing interim findings.

Findings may also be shared with the project team immediately, on a private repository, to discuss their impact, fixes and other implications without waiting until the end of the audit period. If the audit spans multiple weeks, it may also help to have a weekly sync-up call for such discussions and status updates.

A counterpoint to this is to perform the entire assessment independently, so as not to be biased by the project team's inputs and opinions, which may steer the auditors in certain directions at the expense of other aspects.

Writing the Report

An audit report is a tangible deliverable at the end of an audit and therefore report writing becomes a very critical aspect of the entire audit process. The audit report is a final compilation of the entire assessment and presents all aspects of the audit including the audit scope, coverage, timeline, team effort, summaries, tools, techniques, findings, exploit scenarios, suggested fixes, short-term and long-term recommendations, and any appendices with further details of tools and rationale.

An executive summary typically gives an overview of the audit report with highlights and lowlights, illustrating the number, type and severity of the vulnerabilities found, together with an overall assessment of risk. It may also include a description of the smart contracts' actors, assets, roles, permissions, access control, interactions, threat model and existing risk-mitigation measures.

The bulk of the report focuses on the findings of the audit: their type, category, likelihood, impact, severity, justifications for these ratings, potential exploit scenarios, affected parts of the smart contracts and potential remediations. It may also address subjective aspects of code quality, readability and auditability, as well as other software engineering best practices related to documentation, code structure, function and variable naming conventions, test coverage, etc., that do not immediately pose a security risk but are indicators of anti-patterns and of processes influencing the introduction and persistence of security vulnerabilities.

The audit report should articulate all of this information clearly and also be actionable for the project team, so that they can address all the raised concerns.

Exploit Scenarios

Presenting proof of concept exploit scenarios could be a part of certain audits. Remember that exploits are incidents where vulnerabilities are triggered by malicious actors to misuse smart contracts resulting, for example, in stolen or frozen assets.

Presenting proofs of concept of such exploits, either in code or as written descriptions of hypothetical scenarios, makes audit findings more realistic and relatable by illustrating specific exploit paths and justifying the severity of the findings.

It goes without saying that exploit code should only ever be run on a testnet, kept private and responsibly disclosed to the project team, without any risk of it being executed on live systems and causing real loss of funds.

Such descriptive exploit scenarios should make realistic assumptions about the roles and powers of actors, the practical reasons for their actions and the sequencing of events that trigger the vulnerabilities and illustrate the paths to exploitation.

Likelihood & Impact

We have talked about estimating the likelihood, impact and severity of findings.

  • Likelihood indicates the probability of a vulnerability being discovered by malicious actors and triggered to successfully exploit the underlying weakness.

  • Impact indicates the magnitude of the implications on the technical and business aspects of the system if the vulnerability were to be exploited.

Severity, as per OWASP, is a combination of likelihood and impact. With reasonable evaluations of those two, severity should be straightforward to estimate given the OWASP matrix.
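
For reference, the standard OWASP risk rating matrix combines the two estimates as follows:

  • High impact: medium severity at low likelihood, high severity at medium likelihood, critical severity at high likelihood.

  • Medium impact: low severity at low likelihood, medium severity at medium likelihood, high severity at high likelihood.

  • Low impact: a simple "note" at low likelihood, low severity at medium likelihood, medium severity at high likelihood.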

However, estimating whether likelihood (or impact) is low, medium or high is not trivial in many cases. If the exploit can be triggered manually with a few transactions, without requiring significant resources or access (for example, without being an admin) and without assuming many conditions to hold true, then the likelihood is evaluated as high.

Exploits that require deep knowledge of the system's workings, privileged roles, large resources or multiple edge conditions to hold true are evaluated as medium likelihood. Others that require even harder assumptions to hold, such as miner collusion, chain forks or insider collusion for example, are considered low likelihood.

If there is any loss or locking up of funds, then the impact is evaluated as high. Exploits that do not affect funds but disrupt the normal functioning of the system are typically evaluated as medium impact, and anything else is of low impact. Some evaluations of likelihood and impact are contentious and are sometimes debated between the audit and project teams, and sometimes even within the security community at large. This typically happens with security-conscious audit teams pressing for higher likelihood and impact while the project teams downplay the risks.

Delivering the Report

The delivery of the audit report is another important aspect of the audit process, and perhaps its final milestone. When the report is published and presented to the project team, unless interim findings or status have been shared, this will be the first time the project team gets access to the assessment details.

The delivery typically happens via a shared online document and is accompanied by a readout, where the auditors present the report highlights to the project team for discussion of the findings and their severity ratings. The project team then typically takes some time to review the audit report and respond with any counterpoints on finding severities or suggested fixes. Depending on the prior agreement with the project team, the audit firm might release the audit report publicly after all required fixes have been made, or the project may decide to keep it private for some reason.

Evaluating Fixes

Evaluating fixes is typically the final stage in the audit process, and a very critical one. After the findings are reported to the project team, they typically work on any required fixes and then request the audit firm to review them. Fixes may be applied for a majority of the findings, and the review may need to confirm that the applied fixes (which in some cases could differ from what was recommended) indeed mitigate the risk reported in the findings.

Some findings may also be contested as not being relevant or as being outside the project's threat model, or simply acknowledged as being within the project's acceptable risk model. Audit firms may evaluate the specific fixes applied and confirm or deny their risk mitigation. Unless it is a fix-or-retainer type of audit, this phase typically takes no more than a day or two, because it usually falls outside the agreed-upon duration of the audit; nevertheless, most audit firms generally accommodate it to help ensure the security of the project.

So these are the 10 steps of an audit process that you can expect to see within an audit. As mentioned earlier, these are generalized opinions. The specifics of the different steps (their order, the level of effort put into each step, the philosophies behind them...) will surely differ across audit firms, but nevertheless this is something very critical that needs to be paid attention to and understood in order to appreciate the different stages of the audit process.

So finally, to summarize: audits are a time-, resource- and expertise-bound effort in which trained experts evaluate smart contracts using a combination of automated and manual techniques to find as many vulnerabilities as possible, whose difficulty, impact and severity levels might vary.

Similar to what Dijkstra once said about software testing, audits can only show the presence of vulnerabilities, but not their absence.
