Summary: 201 Keypoints
(As found in Secureum's substack and )
Solidity versions: Using very old versions of Solidity prevents the benefits of bug fixes and newer security checks. Using the latest versions might make contracts susceptible to undiscovered compiler bugs. Consider using one of these versions: 0.7.5, 0.7.6 or 0.8.4. (see )
Unlocked pragma: Contracts should be deployed using the same compiler version/flags with which they have been tested. Locking the pragma (e.g. by not using ^ in pragma solidity 0.5.10) ensures that contracts do not accidentally get deployed using an older compiler version with unfixed bugs. (see )
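As a minimal illustration (version numbers are arbitrary), a locked pragma fixes the exact compiler version instead of a range:

```solidity
// Floating pragma: any 0.8.x compiler from 0.8.4 upwards may be used.
// pragma solidity ^0.8.4;

// Locked pragma: only the exact version the contract was tested with.
pragma solidity 0.8.4;

contract Example {}
```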
Multiple Solidity pragma: It is better to use one Solidity compiler version across all contracts instead of different versions with different bugs and security checks. (see )
Incorrect access control: Contract functions executing critical logic should have appropriate access control enforced via address checks (e.g. owner, controller etc.) typically in modifiers. Missing checks allow attackers to control critical logic. (see and )
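A minimal sketch of modifier-based access control (contract and function names are illustrative):

```solidity
pragma solidity 0.8.4;

contract AccessControlled {
    address public owner;
    uint256 public fee;

    constructor() {
        owner = msg.sender;
    }

    // Address check enforced via a modifier.
    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;
    }

    // Critical logic gated by the access-control modifier.
    function setFee(uint256 newFee) external onlyOwner {
        fee = newFee;
    }
}
```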
Unprotected withdraw function: Unprotected (external/public) function calls sending Ether/tokens to user-controlled addresses may allow users to withdraw unauthorized funds. (see )
Unprotected call to selfdestruct: A user/attacker can mistakenly/intentionally kill the contract. Protect access to such functions. (see )
Modifier side-effects: Modifiers should only implement checks and not make state changes or external calls, which violates the checks-effects-interactions pattern. These side-effects may go unnoticed by developers/auditors because the modifier code is typically far from the function implementation. (see )
Incorrect modifier: If a modifier does not execute _ or revert, the function using that modifier will return the default value causing unexpected behavior. (see )
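A hypothetical example of this pitfall: the modifier below neither reverts nor reaches _; for non-owners, so the function silently returns the default value instead of failing:

```solidity
pragma solidity 0.8.4;

contract BrokenModifier {
    address public owner = msg.sender;

    modifier onlyOwnerBroken() {
        if (msg.sender == owner) {
            _;
        }
        // Missing else-revert: for non-owners the function body never runs.
    }

    function contractBalance() external view onlyOwnerBroken returns (uint256) {
        return address(this).balance; // non-owners silently get 0
    }
}
```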
Constructor names: Before solc 0.4.22, constructors had to have the same name as the contract containing them. Misnaming the constructor wouldn’t make it a constructor, which has security implications. Solc 0.4.22 introduced the constructor keyword. Until solc 0.5.0, contracts could have both old-style and new-style constructor names with the first defined one taking precedence over the second if both existed, which also led to security issues. Solc 0.5.0 forced the use of the constructor keyword. (see and )
Void constructor: Calls to base contract constructors that are unimplemented lead to misplaced assumptions. Check if the constructor is implemented or remove the call if not. (see )
Implicit constructor callValue check: The creation code of a contract that does not define a constructor but has a base that does, did not revert for calls with non-zero callValue when such a constructor was not explicitly payable. This is due to a compiler bug introduced in v0.4.5 and fixed in v0.6.8. Starting from Solidity 0.4.5 the creation code of contracts without explicit payable constructor is supposed to contain a callvalue check that results in contract creation reverting, if non-zero value is passed. However, this check was missing in case no explicit constructor was defined in a contract at all, but the contract has a base that does define a constructor. In these cases it is possible to send value in a contract creation transaction or using inline assembly without revert, even though the creation code is supposed to be non-payable. (see )
Controlled delegatecall: delegatecall() or callcode() to an address controlled by the user allows execution of malicious contracts in the context of the caller’s state. Ensure trusted destination addresses for such calls. (see )
Reentrancy vulnerabilities: Untrusted external contract calls could callback leading to unexpected results such as multiple withdrawals or out-of-order events. Use check-effects-interactions pattern or reentrancy guards. (see )
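A minimal withdrawal sketch following the checks-effects-interactions pattern (names are illustrative); the balance is zeroed before the external call so a reentrant call finds nothing left to withdraw:

```solidity
pragma solidity 0.8.4;

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];            // check
        require(amount > 0, "nothing to withdraw");
        balances[msg.sender] = 0;                         // effect, before the interaction
        (bool ok, ) = msg.sender.call{value: amount}(""); // interaction
        require(ok, "transfer failed");
    }
}
```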
ERC777 callbacks and reentrancy: ERC777 tokens allow arbitrary callbacks via hooks that are called during token transfers. Malicious contract addresses may cause reentrancy on such callbacks if reentrancy guards are not used. (see )
Avoid transfer()/send() as reentrancy mitigations: Although transfer() and send() have been recommended as a security best-practice to prevent reentrancy attacks because they only forward 2300 gas, the gas repricing of opcodes may break deployed contracts. Use call() instead, without hardcoded gas limits along with checks-effects-interactions pattern or reentrancy guards for reentrancy protection. (see and )
Private on-chain data: Marking variables private does not mean that they cannot be read on-chain. Private data should not be stored unencrypted in contract code or state but instead stored encrypted or off-chain. (see )
Weak PRNG: PRNG relying on block.timestamp, now or blockhash can be influenced by miners to some extent and should be avoided. (see )
Block values as time proxies: block.timestamp and block.number are not good proxies (i.e. representations, not to be confused with smart contract proxy/implementation pattern) for time because of issues with synchronization, miner manipulation and changing block times. (see )
Integer overflow/underflow: Not using OpenZeppelin’s SafeMath (or similar libraries) that check for overflows/underflows may lead to vulnerabilities or unexpected behavior if user/attacker can control the integer operands of such arithmetic operations. Solc v0.8.0 introduced default overflow/underflow checks for all arithmetic operations. (see and )
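A small sketch of the difference in Solidity >= 0.8.0, where arithmetic reverts on overflow by default and an unchecked block reproduces the silent wrapping of older compilers (which needed SafeMath-style libraries):

```solidity
pragma solidity 0.8.4;

contract OverflowDemo {
    function checkedAdd(uint8 a, uint8 b) external pure returns (uint8) {
        return a + b; // reverts on overflow, e.g. 255 + 1
    }

    function wrappingAdd(uint8 a, uint8 b) external pure returns (uint8) {
        unchecked {
            return a + b; // wraps silently: 255 + 1 == 0
        }
    }
}
```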
Divide before multiply: Performing multiplication before division is generally better to avoid loss of precision because Solidity integer division might truncate. (see )
Transaction order dependence: Race conditions can be forced on specific Ethereum transactions by monitoring the mempool. For example, the classic ERC20 approve() change can be front-run using this method. Do not make assumptions about transaction order dependence. (see )
ERC20 approve() race condition: Use safeIncreaseAllowance() and safeDecreaseAllowance() from OpenZeppelin’s SafeERC20 implementation to prevent race conditions from manipulating the allowance amounts. (see )
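A usage sketch of the SafeERC20 helpers (the import paths assume a recent OpenZeppelin release and may differ by version); allowances are adjusted relative to the current value instead of being overwritten:

```solidity
pragma solidity 0.8.4;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract AllowanceManager {
    using SafeERC20 for IERC20;

    // Increase this contract's allowance to `spender` by `extra` tokens.
    function grantMore(IERC20 token, address spender, uint256 extra) external {
        token.safeIncreaseAllowance(spender, extra);
    }

    // Decrease this contract's allowance to `spender` by `less` tokens.
    function grantLess(IERC20 token, address spender, uint256 less) external {
        token.safeDecreaseAllowance(spender, less);
    }
}
```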
Signature malleability: The ecrecover function is susceptible to signature malleability which could lead to replay attacks. Consider using OpenZeppelin’s ECDSA library. (see , and )
ERC20 transfer() does not return boolean: Contracts compiled with solc >= 0.4.22 interacting with such functions will revert. Use OpenZeppelin’s SafeERC20 wrappers. (see and )
Incorrect return values for ERC721 ownerOf(): Contracts compiled with solc >= 0.4.22 interacting with ERC721 ownerOf() that returns a bool instead of address type will revert. Use OpenZeppelin’s ERC721 contracts. (see )
Unexpected Ether and this.balance: A contract can receive Ether via payable functions, selfdestruct(), coinbase transaction or pre-sent before creation. Contract logic depending on this.balance can therefore be manipulated. (see and )
fallback vs receive(): Check that all precautions and subtleties of fallback/receive functions related to visibility, state mutability and Ether transfers have been considered. (see and )
Dangerous strict equalities: Use of strict equalities with tokens/Ether can accidentally/maliciously cause unexpected behavior. Consider using >= or <= instead of == for such variables depending on the contract logic. (see )
Locked Ether: Contracts that accept Ether via payable functions but without withdrawal mechanisms will lock up that Ether. Remove payable attribute or add withdraw function. (see )
Dangerous usage of tx.origin: Use of tx.origin for authorization may be abused by a MITM malicious contract forwarding calls from the legitimate user who interacts with it. Use msg.sender instead. (see )
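An illustrative sketch (contract name is hypothetical): a phishing contract that the owner is tricked into calling still sees tx.origin == owner, so only the msg.sender check is safe:

```solidity
pragma solidity 0.8.4;

contract Wallet {
    address public owner = msg.sender;

    receive() external payable {}

    function transferTo(address payable to, uint256 amount) external {
        // Vulnerable variant (do not use): require(tx.origin == owner, "not owner");
        // It passes even when the call is forwarded by an attacker's contract.
        require(msg.sender == owner, "not owner"); // checks the immediate caller
        (bool ok, ) = to.call{value: amount}("");
        require(ok, "transfer failed");
    }
}
```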
Contract check: Checking if a call was made from an Externally Owned Account (EOA) or a contract account is typically done using extcodesize check which may be circumvented by a contract during construction when it does not have source code available. Checking if tx.origin == msg.sender is another option. Both have implications that need to be considered. (see )
Deleting a mapping within a struct: Deleting a struct that contains a mapping will not delete the mapping contents which may lead to unintended consequences. (see )
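A small sketch of the behavior (names are illustrative): delete resets the value-type members of the struct but leaves the nested mapping untouched:

```solidity
pragma solidity 0.8.4;

contract MappingDelete {
    struct Account {
        uint256 id;
        mapping(address => uint256) balances;
    }

    mapping(uint256 => Account) internal accounts;

    function setup(uint256 key, address user) external {
        accounts[key].id = key;
        accounts[key].balances[user] = 100;
    }

    function wipe(uint256 key) external {
        delete accounts[key]; // resets `id`, but NOT `balances`
    }

    function leftover(uint256 key, address user) external view returns (uint256) {
        return accounts[key].balances[user]; // still 100 after wipe()
    }
}
```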
Tautology or contradiction: Tautologies (always true) or contradictions (always false) indicate potential flawed logic or redundant checks. e.g. x >= 0 which is always true if x is uint. (see )
Boolean constant: Use of Boolean constants (true/false) in code (e.g. conditionals) is indicative of flawed logic. (see )
Boolean equality: Boolean variables can be checked within conditionals directly without the use of equality operators to true/false. (see )
State-modifying functions: Functions that modify state (in assembly or otherwise) but are labelled constant/pure/view revert in solc >=0.5.0 (work in prior versions) because of the use of STATICCALL opcode. (see )
Return values of low-level calls: Ensure that return values of low-level calls (call/callcode/delegatecall/send/etc.) are checked to avoid unexpected failures. (see )
Account existence check for low-level calls: Low-level calls call/delegatecall/staticcall return true even if the account called is non-existent (per EVM design). Account existence must be checked prior to calling if needed. (see )
Dangerous shadowing: Local variables, state variables, functions, modifiers, or events with names that shadow (i.e. override) builtin Solidity symbols e.g. now or other declarations from the current scope are misleading and may lead to unexpected usages and behavior. (see )
Dangerous state variable shadowing: Shadowing state variables in derived contracts may be dangerous for critical variables such as contract owner (e.g. where modifiers in base contracts check base variables but the shadowed variables are set mistakenly) and when contracts incorrectly use the base/shadowed variables. Do not shadow state variables. (see )
Pre-declaration usage of local variables: Usage of a variable before its declaration (either declared later or in another scope) leads to unexpected behavior in solc < 0.5.0 but solc >= 0.5.0 implements C99-style scoping rules where variables can only be used after they have been declared and only in the same or nested scopes. (see )
Costly operations inside a loop: Operations such as state variable updates (which use SSTOREs) inside a loop cost a lot of gas and may lead to out-of-gas errors. Optimizations using local variables are preferred. (see )
Calls inside a loop: Calls to external contracts inside a loop are dangerous (especially if the loop index can be user-controlled) because it could lead to DoS if one of the calls reverts or execution runs out of gas. Avoid calls within loops, check that loop index cannot be user-controlled or is bounded. (see )
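One common mitigation is the pull-over-push pattern, sketched below (names are illustrative): instead of paying many recipients inside a loop, each recipient withdraws their own share in a separate transaction:

```solidity
pragma solidity 0.8.4;

contract PullPayments {
    mapping(address => uint256) public owed;

    // Record what each recipient is owed instead of pushing ETH in a loop.
    function credit(address recipient) external payable {
        owed[recipient] += msg.value;
    }

    // Each recipient pulls their own funds; one failing recipient cannot
    // block payouts to the others.
    function withdraw() external {
        uint256 amount = owed[msg.sender];
        require(amount > 0, "nothing owed");
        owed[msg.sender] = 0;
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }
}
```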
DoS with block gas limit: Programming patterns such as looping over arrays of unknown size may lead to DoS when the gas cost of execution exceeds the block gas limit. (see )
Missing events: Events for critical state changes (e.g. owner and other critical parameters) should be emitted for tracking them off-chain. (see )
Unindexed event parameters: Parameters of certain events are expected to be indexed (e.g. ERC20 Transfer/Approval events) so that they’re included in the block’s bloom filter for faster access. Failure to do so might confuse off-chain tooling looking for such indexed events. (see )
Incorrect event signature in libraries: Contract types used in events in libraries cause an incorrect event signature hash. Instead of using the type address in the hashed signature, the actual contract name was used, leading to a wrong hash in the logs. This is due to a compiler bug introduced in v0.5.0 and fixed in v0.5.8. (see )
Dangerous unary expressions: Unary expressions such as x =+ 1 are likely errors where the programmer really meant to use x += 1. Unary + operator was deprecated in solc v0.5.0. (see )
Missing zero address validation: Setters of address type parameters should include a zero-address check otherwise contract functionality may become inaccessible or tokens burnt forever. (see )
Critical address change: Changing critical addresses in contracts should be a two-step process where the first transaction (from the old/current address) registers the new address (i.e. grants ownership) and the second transaction (from the new address) replaces the old address with the new one (i.e. claims ownership). This gives an opportunity to recover from incorrect addresses mistakenly used in the first step. If not, contract functionality might become inaccessible. (see and )
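A minimal sketch of such a two-step (propose/claim) ownership transfer (identifiers are illustrative):

```solidity
pragma solidity 0.8.4;

contract TwoStepOwnable {
    address public owner = msg.sender;
    address public pendingOwner;

    event OwnershipProposed(address indexed proposedOwner);
    event OwnershipTransferred(address indexed previousOwner, address indexed newOwner);

    // Step 1: current owner registers the new address.
    function proposeOwner(address newOwner) external {
        require(msg.sender == owner, "not owner");
        pendingOwner = newOwner;
        emit OwnershipProposed(newOwner);
    }

    // Step 2: the proposed address claims ownership; a mistaken proposal can
    // simply be replaced before it is claimed.
    function claimOwnership() external {
        require(msg.sender == pendingOwner, "not pending owner");
        emit OwnershipTransferred(owner, pendingOwner);
        owner = pendingOwner;
        pendingOwner = address(0);
    }
}
```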
assert()/require() state change: Invariants in assert() and require() statements should not modify the state per best practices. (see )
require() vs assert(): require() should be used for checking error conditions on inputs and return values while assert() should be used for invariant checking. Between solc 0.4.10 and 0.8.0, require() used REVERT (0xfd) opcode which refunded remaining gas on failure while assert() used INVALID (0xfe) opcode which consumed all the supplied gas. (see )
Deprecated keywords: Use of deprecated functions/operators such as block.blockhash() for blockhash(), msg.gas for gasleft(), throw for revert(), sha3() for keccak256(), callcode() for delegatecall(), suicide() for selfdestruct(), constant for view or var for actual type name should be avoided to prevent unintended errors with newer compiler versions. (see )
Function default visibility: Functions without a visibility type specifier are public by default in solc < 0.5.0. This can lead to a vulnerability where a malicious user may make unauthorized state changes. solc >= 0.5.0 requires explicit function visibility specifiers. (see )
Incorrect inheritance order: Contracts inheriting from multiple contracts with identical functions should specify the correct inheritance order i.e. more general to more specific to avoid inheriting the incorrect function implementation. (see )
Missing inheritance: A contract might appear (based on name or functions implemented) to inherit from another interface or abstract contract without actually doing so. (see )
Insufficient gas griefing: Transaction relayers need to be trusted to provide enough gas for the transaction to succeed. (see )
Modifying reference type parameters: Structs/Arrays/Mappings passed as arguments to a function may be by value (memory) or reference (storage) as specified by the data location (optional before solc 0.5.0). Ensure correct usage of memory and storage in function parameters and make all data locations explicit. (see )
Arbitrary jump with function type variable: Function type variables should be carefully handled and avoided in assembly manipulations to prevent jumps to arbitrary code locations. (see )
Hash collisions with multiple variable length arguments: Using abi.encodePacked() with multiple variable length arguments can, in certain situations, lead to a hash collision. Do not allow users access to parameters used in abi.encodePacked(), use fixed length arrays or use abi.encode(). (see and )
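A short sketch of the collision: with abi.encodePacked, ("AA", "ABB") and ("AAA", "BB") pack to the same bytes, so their hashes are equal, while abi.encode keeps the length information:

```solidity
pragma solidity 0.8.4;

contract HashCollision {
    // packedHash("AA", "ABB") == packedHash("AAA", "BB") — collision-prone.
    function packedHash(string memory a, string memory b) public pure returns (bytes32) {
        return keccak256(abi.encodePacked(a, b));
    }

    // abi.encode length-prefixes dynamic types, so no such collision occurs.
    function safeHash(string memory a, string memory b) public pure returns (bytes32) {
        return keccak256(abi.encode(a, b));
    }
}
```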
Malleability risk from dirty high order bits: Types that do not occupy the full 32 bytes might contain “dirty higher order bits” which does not affect operation on types but gives different results with msg.data. (see )
Incorrect shift in assembly: Shift operators (shl(x, y), shr(x, y), sar(x, y)) in Solidity assembly apply the shift operation of x bits on y and not the other way around, which may be confusing. Check if the values in a shift operation are reversed. (see )
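A small sketch of the argument order: in Yul, shl(x, y) shifts y left by x bits, the reverse of the Solidity expression y << x:

```solidity
pragma solidity 0.8.4;

contract ShiftOrder {
    function shiftLeftBy4(uint256 value) external pure returns (uint256 viaAsm, uint256 viaSol) {
        assembly {
            viaAsm := shl(4, value) // shift amount first, then the value
        }
        viaSol = value << 4; // equivalent Solidity expression; viaAsm == viaSol
    }
}
```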
Assembly usage: Use of EVM assembly is error-prone and should be avoided or double-checked for correctness. (see )
Right-To-Left-Override control character (U+202E): Malicious actors can use the Right-To-Left-Override unicode character to force RTL text rendering and confuse users as to the real intent of a contract. U+202E character should not appear in the source code of a smart contract. (see )
Constant state variables: State variables that never change should be declared constant to save gas. (see )
Similar variable names: Variables with similar names could be confused for each other and therefore should be avoided. (see )
Uninitialized state/local variables: Uninitialized state/local variables are assigned zero values by the compiler and may cause unintended results e.g. transferring tokens to zero address. Explicitly initialize all state/local variables. (see and )
Uninitialized storage pointers: Uninitialized local storage variables can point to unexpected storage locations in the contract, which can lead to vulnerabilities. Solc 0.5.0 and above disallow such pointers. (see )
Uninitialized function pointers in constructors: Calling uninitialized function pointers in constructors of contracts compiled with solc versions 0.4.5-0.4.25 and 0.5.0-0.5.7 leads to unexpected behavior because of a compiler bug. (see )
Long number literals: Number literals with many digits should be carefully checked as they are prone to error. (see )
Out-of-range enum: Solc < 0.4.5 produced unexpected behavior with out-of-range enums. Check enum conversion or use a newer compiler. (see )
Uncalled public functions: Public functions that are never called from within the contracts should be declared external to save gas. (see )
Dead/Unreachable code: Dead code may be indicative of programmer error, missing logic or potential optimization opportunity, which needs to be flagged for removal or addressed appropriately. (see )
Unused return values: Unused return values of function calls are indicative of programmer errors which may have unexpected behavior. (see )
Unused variables: Unused state/local variables may be indicative of programmer error, missing logic or potential optimization opportunity, which needs to be flagged for removal or addressed appropriately. (see )
Redundant statements: Statements with no effects that do not produce code may be indicative of programmer error or missing logic, which needs to be flagged for removal or addressed appropriately. (see )
Storage array with signed integers with ABIEncoderV2: Assigning an array of signed integers to a storage array of different type can lead to data corruption in that array. This is due to a compiler bug introduced in v0.4.7 and fixed in v0.5.10. (see )
Dynamic constructor arguments clipped with ABIEncoderV2: A contract's constructor which takes structs or arrays that contain dynamically sized arrays reverts or decodes to invalid data when ABIEncoderV2 is used. This is due to a compiler bug introduced in v0.4.16 and fixed in v0.5.9. (see )
Storage array with multiSlot element with ABIEncoderV2: Storage arrays containing structs or other statically sized arrays are not read properly when directly encoded in external function calls or in abi.encode(). This is due to a compiler bug introduced in v0.4.16 and fixed in v0.5.10. (see )
Calldata structs with statically sized and dynamically encoded members with ABIEncoderV2: Reading from calldata structs that contain dynamically encoded, but statically sized members can result in incorrect values. This is due to a compiler bug introduced in v0.5.6 and fixed in v0.5.11. (see )
Packed storage with ABIEncoderV2: Storage structs and arrays with types shorter than 32 bytes can cause data corruption if encoded directly from storage using ABIEncoderV2. This is due to a compiler bug introduced in v0.5.0 and fixed in v0.5.7. (see )
Incorrect loads with Yul optimizer and ABIEncoderV2: The Yul optimizer incorrectly replaces MLOAD and SLOAD calls with values that have been previously written to the load location. This can only happen if ABIEncoderV2 is activated and the experimental Yul optimizer has been activated manually in addition to the regular optimizer in the compiler settings. This is due to a compiler bug introduced in v0.5.14 and fixed in v0.5.15. (see )
Array slice dynamically encoded base type with ABIEncoderV2: Accessing array slices of arrays with dynamically encoded base types (e.g. multi-dimensional arrays) can result in invalid data being read. This is due to a compiler bug introduced in v0.6.0 and fixed in v0.6.8. (see )
Missing escaping in formatting with ABIEncoderV2: String literals containing double backslash characters passed directly to external or encoding function calls can lead to a different string being used when ABIEncoderV2 is enabled. This is due to a compiler bug introduced in v0.5.14 and fixed in v0.6.8. (see )
Double shift size overflow: Double bitwise shifts by large constants whose sum overflows 256 bits can result in unexpected values. Nested logical shift operations whose total shift size is 2**256 or more are incorrectly optimized. This only applies to shifts by numbers of bits that are compile-time constant expressions. This happens when the optimizer is used and evmVersion >= Constantinople. This is due to a compiler bug introduced in v0.5.5 and fixed in v0.5.6. (see )
Incorrect byte instruction optimization: The optimizer incorrectly handles byte opcodes whose second argument is 31 or a constant expression that evaluates to 31. This can result in unexpected values. This can happen when performing index access on bytesNN types with a compile time constant value (not index) of 31 or when using the byte opcode in inline assembly. This is due to a compiler bug introduced in v0.5.5 and fixed in v0.5.7. (see )
Essential assignments removed with Yul Optimizer: The Yul optimizer can remove essential assignments to variables declared inside for loops when Yul's continue or break statement is used, which mostly occurs when using inline assembly with for loops containing continue or break statements. This is due to a compiler bug introduced in v0.5.8/v0.6.0 and fixed in v0.5.16/v0.6.1. (see )
Private methods overridden: While private methods of base contracts are not visible and cannot be called directly from the derived contract, it is still possible to declare a function of the same name and type and thus change the behaviour of the base contract's function. This is due to a compiler bug introduced in v0.3.0 and fixed in v0.5.17. (see )
Tuple assignment multi stack slot components: Tuple assignments with components that occupy several stack slots, i.e. nested tuples, pointers to external functions or references to dynamically sized calldata arrays, can result in invalid values. This is due to a compiler bug introduced in v0.1.6 and fixed in v0.6.6. (see )
Dynamic array cleanup: When assigning to a dynamically sized storage array whose element types are at most 16 bytes, such that the assigned array shrinks, some parts of the deleted slots were not zeroed out. This is due to a compiler bug fixed in v0.7.3. (see )
Empty byte array copy: Copying an empty byte array (or string) from memory or calldata to storage can result in data corruption if the target array's length is increased subsequently without storing new data. This is due to a compiler bug fixed in v0.7.4. (see )
Memory array creation overflow: The creation of very large memory arrays can result in overlapping memory regions and thus memory corruption. This is due to a compiler bug introduced in v0.2.0 and fixed in v0.6.5. (see )
Calldata using for: Function calls to internal library functions with calldata parameters called via “using for” can result in invalid data being read. This is due to a compiler bug introduced in v0.6.9 and fixed in v0.6.10. (see )
Free function redefinition: The compiler does not flag an error when two or more free functions (functions outside of a contract) with the same name and parameter types are defined in a source unit or when an imported free function alias shadows another free function with a different name but identical parameter types. This is due to a compiler bug introduced in v0.7.1 and fixed in v0.7.2. (see )
Unprotected initializers in proxy-based upgradeable contracts: Proxy-based upgradeable contracts need to use public initializer functions instead of constructors, and such initializer functions need to be explicitly called exactly once. Preventing multiple invocations of such initializer functions (e.g. via the initializer modifier from OpenZeppelin’s Initializable library) is a must. (see and )
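A usage sketch with OpenZeppelin's Initializable (the import path assumes a recent version of the upgradeable-contracts package and may differ): the initializer modifier ensures the function can only ever run once:

```solidity
pragma solidity 0.8.4;

import "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";

contract MyUpgradeable is Initializable {
    address public owner;
    uint256 public fee;

    // Stands in for a constructor; the `initializer` modifier blocks repeat calls.
    function initialize(address owner_, uint256 fee_) external initializer {
        owner = owner_;
        fee = fee_;
    }
}
```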
Initializing state-variables in proxy-based upgradeable contracts: This should be done in initializer functions and not as part of the state variable declarations in which case they won’t be set. (see )
Import upgradeable contracts in proxy-based upgradeable contracts: Contracts imported from proxy-based upgradeable contracts should also be upgradeable where such contracts have been modified to use initializers instead of constructors. (see )
Avoid selfdestruct or delegatecall in proxy-based upgradeable contracts: This will cause the logic contract to be destroyed and all contract instances will end up delegating calls to an address without any code. (see )
State variables in proxy-based upgradeable contracts: The declaration order/layout and type/mutability of state variables in such contracts should be preserved exactly while upgrading to prevent critical storage layout mismatch errors. (see )
Function ID collision between proxy/implementation in proxy-based upgradeable contracts: Malicious proxy contracts may exploit function ID collision to invoke unintended proxy functions instead of delegating to implementation functions. Check for function ID collisions. (see and )
Function shadowing between proxy/contract in proxy-based upgradeable contracts: Functions in the proxy contract that shadow functions in the logic contract prevent the logic contract's functions from being invoked. (see )
ERC20 transfer and transferFrom: Should return a boolean. Several tokens do not return a boolean on these functions. As a result, their calls in the contract might fail. (see )
ERC20 name, decimals, and symbol functions: Are present if used. These functions are optional in the ERC20 standard and might not be present. (see )
ERC20 decimals returns a uint8: Several tokens incorrectly return a uint256. If this is the case, ensure the value returned is below 255. (see )
ERC20 approve race-condition: The ERC20 standard has a known ERC20 race condition that must be mitigated to prevent attackers from stealing tokens. (see )
ERC777 hooks: ERC777 tokens have the concept of a hook function that is called before any calls to send, transfer, operatorSend, minting and burning. While these hooks enable a lot of interesting use cases, care should be taken to make sure they do not make external calls because that can lead to reentrancies. (see )
Token Deflation via fees: Transfer and transferFrom should not take a fee. Deflationary tokens can lead to unexpected behavior. (see )
Token Inflation via interest: Potential interest earned from the token should be taken into account. Some tokens distribute interest to token holders. This interest might be trapped in the contract if not taken into account. (see )
Token contract avoids unneeded complexity: The token should be a simple contract; a token with complex code requires a higher standard of review. (see )
Token contract has only a few non–token-related functions: Non–token-related functions increase the likelihood of an issue in the contract. (see )
Token only has one address: Tokens with multiple entry points for balance updates can break internal bookkeeping based on the address (e.g. balances[token_address][msg.sender] might not reflect the actual balance). (see )
Token is not upgradeable: Upgradeable contracts might change their rules over time. (see )
Token owner has limited minting capabilities: Malicious or compromised owners can abuse minting capabilities. (see )
Token is not pausable: Malicious or compromised owners can trap contracts relying on pausable tokens. (see )
Token owner cannot blacklist the contract: Malicious or compromised owners can trap contracts relying on tokens with a blacklist. (see )
Token development team is known and can be held responsible for abuse: Contracts with anonymous development teams, or that reside in legal shelters, should require a higher standard of review. (see )
No token user owns most of the supply: If a few users own most of the tokens, they can influence operations based on the token's distribution. (see )
Token total supply is sufficient: Tokens with a low total supply can be easily manipulated. (see )
Tokens are located in more than a few exchanges: If all the tokens are in one exchange, a compromise of the exchange can compromise the contract relying on the token. (see )
Token balance and Flash loans: Users understand the associated risks of large funds or flash loans. Contracts relying on the token balance must carefully take into consideration attackers with large funds or attacks through flash loans. (see )
Token does not allow flash minting: Flash minting can lead to substantial swings in the balance and the total supply, which necessitate strict and comprehensive overflow checks in the operation of the token. (see )
ERC1400 permissioned addresses: Can block transfers from/to specific addresses. (see )
ERC1400 forced transfers: Trusted actors have the ability to transfer funds however they choose. (see )
ERC1644 forced transfers: Controller has the ability to steal funds. (see )
ERC621 control of totalSupply: totalSupply can be changed by trusted actors. (see )
ERC884 cancel and reissue: Token implementers have the ability to cancel an address and move its tokens to a new address. (see )
ERC884 whitelisting: Tokens can only be sent to whitelisted addresses. (see )
Guarded launch via asset limits: Limiting the total asset value managed by a system initially upon launch and gradually increasing it over time may reduce impact due to initial vulnerabilities or exploits. (see )
Guarded launch via asset types: Limiting types of assets that can be used in the protocol initially upon launch and gradually expanding to other assets over time may reduce impact due to initial vulnerabilities or exploits. (see )
Guarded launch via user limits: Limiting the total number of users that can interact with a system initially upon launch and gradually increasing it over time may reduce impact due to initial vulnerabilities or exploits. Initial users may also be whitelisted to limit to trusted actors before opening the system to everyone. (see )
Guarded launch via usage limits: Enforcing transaction size limits, daily volume limits, per-account limits, or rate-limiting transactions may reduce impact due to initial vulnerabilities or exploits. (see )
Guarded launch via composability limits: Restricting the composability of the system to interface only with whitelisted trusted contracts before expanding to arbitrary external contracts may reduce impact due to initial vulnerabilities or exploits. (see )
Guarded launch via escrows: Escrowing high value transactions/operations with time locks and a governance capability to nullify or revert transactions may reduce impact due to initial vulnerabilities or exploits. (see )
Guarded launch via circuit breakers: Implementing capabilities to pause/unpause a system in extreme scenarios may reduce impact due to initial vulnerabilities or exploits. (see )
Guarded launch via emergency shutdown: Implementing capabilities that allow governance to shut down new activity in the system and allow users to reclaim assets may reduce impact due to initial vulnerabilities or exploits. (see )
System specification: Ensure that the specification of the entire system is considered, written and evaluated to the greatest detail possible. Specification describes how (and why) the different components of the system behave to achieve the design requirements. Without specification, a system implementation cannot be evaluated against the requirements for correctness.
System documentation: Ensure that roles, functionalities and interactions of the entire system are well documented to the greatest detail possible. Documentation describes what (and how) the implementation of different components of the system does to achieve the specification goals. Without documentation, a system implementation cannot be evaluated against the specification for correctness and one will have to rely on analyzing the implementation itself.
Function parameters: Ensure input validation for all function parameters especially if the visibility is external/public where (untrusted) users can control values. This is especially required for address parameters where maliciously/accidentally used incorrect/zero addresses can cause vulnerabilities or unexpected behavior.
Function arguments: Ensure that the arguments to function calls at the caller sites are the correct ones and in the right order as expected by the function definition.
Function visibility: Ensure that the strictest visibility is used for the required functionality. An accidental external/public visibility will allow (untrusted) users to invoke functionality that is supposed to be restricted internally.
Function modifiers: Ensure that the right set of function modifiers are used (in the correct order) for the specific functions so that the expected access control or validation is correctly enforced.
Function return values: Ensure that the appropriate return value(s) are returned from functions and checked without ignoring at function call sites, so that error conditions are caught and handled appropriately.
Function invocation timeliness: Externally accessible functions (external/public visibility) may be called at any time (or never). It is not safe to assume they will only be called at specific system phases (e.g. after initialization, when unpaused, during liquidation) that are meaningful to the system design. The reason for this can be accidental or malicious. Function implementation should be robust enough to track system state transitions, determine meaningful states for invocations and withstand arbitrary calls. For example, initialization functions (used with upgradeable contracts that cannot use constructors) are meant to be called atomically along with contract deployment to prevent anyone else from initializing with arbitrary values.
Function invocation repetitiveness: Externally accessible functions (external/public visibility) may be called any number of times. It is not safe to assume they will only be called once or a specific number of times that is meaningful to the system design. Function implementation should be robust enough to track, prevent, ignore or account for arbitrarily repetitive invocations. For example, initialization functions (used with upgradeable contracts that cannot use constructors) are meant to be called only once.
Function invocation order: Externally accessible functions (external/public visibility) may be called in any order (with respect to other defined functions). It is not safe to assume they will only be called in the specific order that makes sense to the system design or is implicitly assumed in the code. For example, initialization functions (used with upgradeable contracts that cannot use constructors) are meant to be called before other system functions can be called.
Function invocation arguments: Externally accessible functions (external/public visibility) may be called with any possible arguments. Without complete and proper validation (e.g. zero address checks, bound checks, threshold checks etc.), they cannot be assumed to comply with any assumptions made about them in the code.
Conditionals: Ensure that in conditional expressions (e.g. if statements), the correct variables are being checked and the correct operators, if any, are being used to combine them. For example, using || instead of && is a common error.
Access control specification: Ensure that the various system actors, their access control privileges and trust assumptions are accurately specified in great detail so that they are correctly implemented and enforced across different contracts, functions and system transitions/flows.
Access control implementation: Ensure that the specified access control is implemented uniformly across all the subjects (actors) seeking access and objects (variables, functions) being accessed so that there are no paths/flows where the access control is missing or may be side-stepped.
Missing modifiers: Access control is typically enforced on functions using modifiers that check if specific addresses/roles are calling these functions. Ensure that such modifiers are present on all relevant functions which require that specific access control.
Incorrectly implemented modifiers: Access control is typically enforced on functions using modifiers that check if specific addresses/roles are calling these functions. A system can have multiple roles with different privileges. Ensure that modifiers are implementing the expected checks on the correct roles/addresses with the right composition e.g. incorrect use of || instead of && is a common vulnerability while composing access checks.
Incorrectly used modifiers: Access control is typically enforced on functions using modifiers that check if specific addresses/roles are calling these functions. A system can have multiple roles with different privileges. Ensure that correct modifiers are used on functions requiring specific access control enforced by that modifier.
Access control changes: Ensure that changes to access control (e.g. change of ownership to new addresses) are handled with extra security so that such transitions happen smoothly without contracts getting locked out or compromised due to use of incorrect credentials.
Comments: Ensure that the code is well commented both with NatSpec and inline comments for better readability and maintainability. The comments should accurately reflect what the corresponding code does. Stale comments should be removed. Discrepancies between code and comments should be addressed. Any TODO’s indicated by comments should be addressed. Commented code should be removed.
Tests: Tests indicate that the system implementation has been validated against the specification. Unit tests, functional tests and integration tests should have been performed to achieve good test coverage across the entire codebase. Any code or parameterisation used specifically for testing should be removed from production code.
Unused constructs: Any unused imports, inherited contracts, functions, parameters, variables, modifiers, events or return values should be removed (or used appropriately) after careful evaluation. This will not only reduce gas costs but improve readability and maintainability of the code.
Redundant constructs: Redundant code and comments can be confusing and should be removed (or changed appropriately) after careful evaluation. This will not only reduce gas costs but improve readability and maintainability of the code.
ETH Handling: Contracts that accept/manage/transfer ETH should ensure that functions handling ETH are using msg.value appropriately, logic that depends on ETH value accounts for less/more ETH sent, logic that depends on contract ETH balance accounts for the different direct/indirect (e.g. coinbase transaction, selfdestruct recipient) ways of receiving ETH and transfers are reentrancy safe. Functions handling ETH should be checked extra carefully for access control, input validation and error handling.
Token Handling: Contracts that accept/manage/transfer ERC tokens should ensure that functions handling tokens account for different types of ERC tokens (e.g. ERC20 vs ERC777), deflationary/inflationary tokens, rebasing tokens and trusted/external tokens. Functions handling tokens should be checked extra carefully for access control, input validation and error handling.
Trusted actors: Ideally there should be no trusted actors while interacting with smart contracts. However, in guarded launch scenarios, the goal is to start with trusted actors and then progressively decentralise towards automated governance by community/DAO. For the trusted phase, all the trusted actors, their roles and capabilities should be clearly specified, implemented accordingly and documented for user information and examination.
Privileged roles and EOAs: Trusted actors who have privileged roles with capabilities to deploy contracts, change critical parameters, pause/unpause system, trigger emergency shutdown, withdraw/transfer/drain funds and allow/deny other actors should be addresses controlled by multiple, independent, mutually distrusting entities. They should not be controlled by the private keys of EOAs but by multisigs with a high threshold (e.g. 5-of-7, 9-of-11) and eventually by a DAO of token holders. An EOA is a single point of failure.
Two-step change of privileged roles: When privileged roles are being changed, it is recommended to follow a two-step approach: 1) The current privileged role proposes a new address for the change 2) The newly proposed address then claims the privileged role in a separate transaction. This two-step change allows accidental proposals to be corrected instead of leaving the system operationally with no/malicious privileged role. For example, in a single-step change, if the current admin accidentally changes the new admin to a zero-address or an incorrect address (where the private keys are not available), the system is left without an operational admin and will have to be redeployed.
Time-delayed change of critical parameters: When critical parameters of systems need to be changed, it is required to broadcast the change via event emission and recommended to enforce the changes after a time-delay. This is to allow system users to be aware of such critical changes and give them an opportunity to exit or adjust their engagement with the system accordingly. For example, reducing the rewards or increasing the fees in a system might not be acceptable to some users who may wish to withdraw their funds and exit.
Explicit over Implicit: While Solidity has progressively adopted explicit declarations of intent, e.g. with function visibility and variable storage, it is recommended to do the same at the application level where all requirements should be explicitly validated (e.g. input parameters) and assumptions should be documented and checked. Implicit requirements and assumptions should be flagged as dangerous.
Configuration issues: Misconfiguration of system components such as contracts, parameters, addresses and permissions may lead to security issues.
Initialization issues: Lack of initialization, initializing with incorrect values or allowing untrusted actors to initialize system parameters may lead to security issues.
Cleanup issues: Failing to clean up old state, or cleaning it up incorrectly/insufficiently, will lead to reuse of stale state which may lead to security issues.
Data processing issues: Processing data incorrectly will cause unexpected behavior which may lead to security issues.
Data validation issues: Missing validation of data or incorrectly/insufficiently validating data, especially tainted data from untrusted users, will cause untrustworthy system behavior which may lead to security issues.
Numerical issues: Incorrect numerical computation will cause unexpected behavior which may lead to security issues.
Accounting issues: Incorrect or insufficient tracking or accounting of business logic related aspects such as states, phases, permissions, roles, funds (deposits/withdrawals) and tokens (mints/burns/transfers) may lead to security issues.
Access control issues: Incorrect or insufficient access control or authorization related to system actors, roles, assets and permissions may lead to security issues.
Auditing/logging issues: Incorrect or insufficient emission of events will impact off-chain monitoring and incident response capabilities which may lead to security issues.
Cryptography issues: Incorrect or insufficient cryptography especially related to on-chain signature verification or off-chain key management will impact access control and may lead to security issues.
Error-reporting issues: Incorrect or insufficient detecting, reporting and handling of error conditions will cause exceptional behavior to go unnoticed which may lead to security issues.
Denial-of-Service (DoS) issues: Preventing other users from successfully accessing system services by modifying system parameters or state causes denial-of-service issues which affects the availability of the system. This may also manifest as security issues if users are not able to access their funds locked in the system.
Timing issues: Incorrect assumptions on timing of user actions, system state transitions or blockchain state/blocks/transactions may lead to security issues.
Ordering issues: Incorrect assumptions on ordering of user actions or system state transitions may lead to security issues. For example, a user may accidentally/maliciously call a finalization function even before the initialization function if the system allows.
Undefined behavior issues: Any behavior that is undefined in the specification but is allowed in the implementation will result in unexpected outcomes which may lead to security issues.
External interaction issues: Interacting with external components (e.g. tokens, contracts, oracles) forces the system to trust or make assumptions about their correctness/availability; without validation of their existence and outputs, this may lead to security issues.
Trust issues: Incorrect or insufficient trust assumptions about/among system actors and external entities will lead to privilege escalation/abuse which may lead to security issues.
Gas issues: Incorrect assumptions about gas requirements especially for loops or external calls will lead to out-of-gas exceptions which may lead to security issues such as failed transfers or locked funds.
Dependency issues: Dependencies on external actors or software (imports, contracts, libraries, tokens etc.) will lead to trust/availability/correctness assumptions which if/when broken may lead to security issues.
Constant issues: Incorrect assumptions about system actors, entities or parameters being constant may lead to security issues if/when such factors change unexpectedly.
Freshness issues: Incorrect assumptions about the status of, or data from, system actors or entities being fresh (i.e. not stale), because of a lack of updates or availability, may lead to security issues if/when such factors have been updated. For example, getting a stale price from an oracle.
Scarcity issues: Incorrect assumptions about the scarcity of tokens/funds available to any system actor will lead to unexpected outcomes which may lead to security issues. For example, flash loans.
Incentive issues: Incorrect assumptions about the incentives of system/external actors to perform or not perform certain actions will lead to unexpected behavior being triggered or expected behavior not being triggered, both of which may lead to security issues. For example, the incentive to liquidate positions, or the lack of incentive to DoS or grief the system.
Clarity issues: Lack of clarity in system specification, documentation, implementation or UI/UX will lead to incorrect expectations/outcome which may lead to security issues.
Privacy issues: Data and transactions on the Ethereum blockchain are not private. Anyone can observe contract state and track transactions (both included in a block and pending in the mempool). Incorrect assumptions about privacy aspects of data or transactions can be abused which may lead to security issues.
Cloning issues: Copy-pasting code from other libraries, contracts or even different parts of the same contract may result in incorrect code semantics for the context being copied to, copy over any vulnerabilities or miss any security fixes applied to the original code. All these may lead to security issues.
Business logic issues: Incorrect or insufficient assumptions about the higher-order business logic being implemented in the application will lead to differences in expected and actual behavior, which may result in security issues.
Principle of Least Privilege: “Every program and every user of the system should operate using the least set of privileges necessary to complete the job” — Ensure that various system actors have the least amount of privilege granted as required by their roles to execute their specified tasks. Granting excess privilege is prone to misuse/abuse when trusted actors misbehave or their access is hijacked by malicious entities. (See )
Principle of Separation of Privilege: “Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key” — Ensure that critical privileges are separated across multiple actors so that there are no single points of failure/abuse. A good example of this is to require a multisig address (not EOA) for privileged actors (e.g. owner, admin, governor, deployer) who control key contract functionality such as pause/unpause/shutdown, emergency fund drain, upgradeability, allow/deny list and critical parameters. The multisig address should be composed of entities that are different and mutually distrusting/verifying. (See )
Principle of Least Common Mechanism: “Minimize the amount of mechanism common to more than one user and depended on by all users” — Ensure that only the least number of security-critical modules/paths as required are shared amongst the different actors/code so that impact from any vulnerability/compromise in shared components is limited and contained to the smallest possible subset. (See )
Principle of Fail-safe Defaults: “Base access decisions on permission rather than exclusion” — Ensure that variables or permissions are initialized to fail-safe default values which can be made more inclusive later instead of opening up the system to everyone including untrusted actors. (See )
Principle of Complete Mediation: “Every access to every object must be checked for authority.” — Ensure that any required access control is enforced along all access paths to the object or function being protected. (See )
Principle of Economy of Mechanism: “Keep the design as simple and small as possible” — Ensure that contracts and functions are not overly complex or large so as to reduce readability or maintainability. Complexity typically leads to insecurity. (See )
Principle of Open Design: “The design should not be secret” — Smart contracts are expected to be open-sourced and accessible to everyone. Security by obscurity of code or underlying algorithms is not an option. Security should be derived from the strength of the design and implementation under the assumption that (byzantine) attackers will study their details and try to exploit them in arbitrary ways. (See )
Principle of Psychological Acceptability: “It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly” — Ensure that security aspects of smart contract interfaces and system designs/flows are user-friendly and intuitive so that users can interact with minimal risk. (See )
Principle of Work Factor: “Compare the cost of circumventing the mechanism with the resources of a potential attacker” — Given the magnitude of value managed by smart contracts, it is safe to assume that byzantine attackers will risk the greatest amounts of intellectual/financial/social capital possible to subvert such systems. Therefore, the mitigation mechanisms must factor in the highest levels of risk. (See )
Principle of Compromise Recording: “Mechanisms that reliably record that a compromise of information has occurred can be used in place of more elaborate mechanisms that completely prevent loss” — Ensure that smart contracts and their accompanying operational infrastructure can be monitored/analyzed at all times (development/deployment/runtime) for minimizing loss from any compromise due to vulnerabilities/exploits. For example, critical operations in contracts should necessarily emit events to facilitate monitoring at runtime. (See )