Debugging Best Practices

Rule debugging

Out of the box, you typically debug a rule at runtime via console logging using the console.log facility. To learn more, read console.log() in MDN Web Docs. There is no interactive debugging of a rule available within the Auth0 platform (though you could employ the testing automation technique described below in conjunction with an external interactive source-debugging facility; to learn more, read Rules Testing Best Practices).
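For instance, a rule can emit diagnostic output with ordinary console.log calls. The sketch below (all names hypothetical) follows the standard user/context/callback rule signature; the stub invocation at the end exists only so the example can run outside of Auth0, where the platform supplies those arguments:

```javascript
// Hypothetical rule illustrating basic console.log debugging. In Auth0,
// this output appears in the Real-time Webtask Logs extension (or via the
// rule editor's TRY facility), not on screen during normal execution.
function logExampleRule(user, context, callback) {
  // Log values of interest at the points where behavior is non-obvious.
  console.log('rule start: user_id=', user.user_id);
  console.log('connection=', context.connection);
  return callback(null, user, context);
}

// Stub invocation for local experimentation only; Auth0 normally supplies
// user, context, and callback.
logExampleRule(
  { user_id: 'auth0|example' },
  { connection: 'Username-Password-Authentication' },
  function (err, user, context) {
    console.log('rule finished, err=', err);
  }
);
```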

Add line comments

Adding sufficient line (i.e., //) or block (i.e., /* */) comments to a rule, particularly around non-obvious functionality, is invaluable for both debugging and understanding the code, especially since the person who initially implements a rule is often not the person responsible for maintaining it going forward.

Real-time Webtask logging

By default, console log output is unavailable for display during normal execution. However, you can use the Real-time Webtask Logs extension to display all console logs in real-time for all implemented extensibility in an Auth0 tenant, including rules. The real-time console log display provided by the extension includes all console.log output, console.error output, and console.exception output. To learn more, read console.error() in MDN Web Docs.
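As a minimal sketch (all names hypothetical), console.error can be used alongside console.log so that failure paths stand out in the real-time log display:

```javascript
// Hypothetical rule: log the happy path with console.log and the failure
// path with console.error, so errors are easy to spot in the log stream.
function rolesRule(user, context, callback) {
  // In Auth0, app_metadata is populated on the user object by the
  // platform; this sketch simply reads it defensively.
  var roles = (user.app_metadata && user.app_metadata.roles) || null;
  if (!roles) {
    console.error('[ROLES]: no roles found for user', user.user_id);
    return callback(new Error('roles missing for ' + user.user_id));
  }
  console.log('[ROLES]: roles=', roles);
  return callback(null, user, context);
}

// Stub invocations for local experimentation; Auth0 supplies the arguments.
rolesRule({ user_id: 'auth0|a', app_metadata: { roles: ['admin'] } }, {},
  function (err) { console.log('ok, err=', err); });
rolesRule({ user_id: 'auth0|b' }, {},
  function (err) { console.log('failed, err=', err && err.message); });
```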

Enable and disable debug logging

In a production environment, debug logging isn’t desirable all of the time; given the performance considerations associated with rules, it would not be prudent to have it continuously enabled. To learn more, read Performance Best Practices.

However, in a development or testing environment, the option to enable debug logging on a more continuous basis is much more desirable. Even so, excessive debug logging can create substantial “noise,” which can make identifying problems that much harder.

Modifying a rule by hand to enable or disable debug logging depending on the environment would be messy and error-prone. To learn more, read Rules Environment Best Practices. Instead, the environment configuration object can be leveraged to implement conditional processing in a fashion similar to the following:

  function NPClaims(user, context, callback) {
    /* This rule (named NPClaims) is used to derive
     * effective claims associated with the Normalized User Profile.
     */
    var LOG_TAG = '[NP_CLAIMS]: ';
    var DEBUG = configuration.DEBUG ? console.log : function () {};
    DEBUG(LOG_TAG, "identities=", user.identities);
    user.user_metadata = user.user_metadata || {};

    user.family_name =
      user.family_name ||
      user.identities.filter(function (identity) {
        /* Filter out identities which do not have anything synonymous with
         * Family Name.
         */
        return identity.profileData && identity.profileData.family_name;
      }).map(function (identity) {
        return identity.profileData.family_name;
      })[0];
    DEBUG(LOG_TAG, "Computed user.family_name as '", user.family_name, "'");

    return callback(null, user, context);
  }
In the example above, a DEBUG environment configuration variable has been created, which can be set to true or false depending on the execution environment (e.g., production, testing, development). The setting of this variable determines whether debug logging is performed. Further, a DEBUGLEVEL environment configuration variable could also be created to control the debug logging level (e.g., verbose, medium, sparse).
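One way such a DEBUGLEVEL scheme could be sketched is shown below. The variable name DEBUGLEVEL, the level names, and the helper are all illustrative, not an Auth0 built-in; in a real rule the configuration object is supplied by the platform, so it is stubbed here:

```javascript
// Stub: in a rule, `configuration` is provided by Auth0 from the
// environment configuration settings.
var configuration = { DEBUG: 'true', DEBUGLEVEL: 'medium' };

// Hypothetical level scheme: higher numbers mean chattier logging.
var LEVELS = { sparse: 1, medium: 2, verbose: 3 };
var active = configuration.DEBUG ? (LEVELS[configuration.DEBUGLEVEL] || 1) : 0;

function makeLogger(level) {
  // Return console.log only when the requested level is enabled;
  // otherwise a no-op, so disabled logging costs almost nothing.
  return LEVELS[level] <= active ? console.log.bind(console) : function () {};
}

var DEBUG_SPARSE = makeLogger('sparse');
var DEBUG_VERBOSE = makeLogger('verbose');

DEBUG_SPARSE('[EXAMPLE]: rule start');     // emitted when DEBUGLEVEL=medium
DEBUG_VERBOSE('[EXAMPLE]: full payload');  // suppressed when DEBUGLEVEL=medium
```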

The above example also demonstrates declaration of a named function. For convenience, providing a function name (using some compact and unique naming convention) can assist with diagnostic analysis. Anonymous functions make it hard to interpret the call stack generated by any exceptional error condition; providing a unique function name addresses this. To learn more, read Error Handling Best Practices.
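To illustrate (the function name below is hypothetical), a named callback appears by name in the stack trace of any thrown error, whereas an anonymous callback yields an unhelpful anonymous frame:

```javascript
// Hypothetical named callback following a compact, unique naming convention.
function NPClaims_filterIdentity(identity) {
  // Deliberately throw to demonstrate the named stack frame.
  throw new Error('unexpected identity shape');
}

var stack = '';
try {
  [{ provider: 'auth0' }].filter(NPClaims_filterIdentity);
} catch (e) {
  stack = e.stack;
}

// The frame "at NPClaims_filterIdentity" pinpoints the failing callback,
// which an anonymous function expression would not.
console.log(stack.indexOf('NPClaims_filterIdentity') !== -1);
```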

Static analysis

The rule editor in the Auth0 dashboard provides some rudimentary syntax checking and analysis of rule semantics. However, no provision is made for more complex static code analysis, such as overwrite detection, loop detection, or vulnerability detection. To address this, consider leveraging the use of third-party tooling—such as JSHint, SonarJS, or Coverity—in conjunction with rule testing as part of your deployment automation process. To learn more, read Deployment Best Practices.
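As a sketch, a JSHint configuration checked in alongside the rule sources might look like the following; the specific options chosen here are illustrative, and the globals entries reflect objects Auth0 injects into the rule sandbox:

```json
{
  "esversion": 6,
  "undef": true,
  "unused": true,
  "maxdepth": 3,
  "globals": {
    "configuration": true,
    "auth0": true,
    "UnauthorizedError": true
  }
}
```

Declaring the sandbox globals up front lets undef checking catch genuinely undeclared variables without flagging platform-provided objects.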

Learn more