Business rules can be found throughout regulatory reporting programmes, with everything from proportionality to supervisory process defined in a single rulebook. Automating these rules allows regulators to reduce the cost of regulation and free specialists from mundane tasks, which makes business rule automation an attractive area for investment.

Rules that relate to data quality are invariably the first to be addressed, and for good reason: data quality rules increase user confidence in the data. Once users are confident in the data, automating other parts of the business process becomes easier to consider and deliver.

In the rest of this article, we look at the different types of data quality rules and what can be expected from them. Of course, CoreFiling can deliver all of these using open-standard, vendor-neutral, ready-to-go XBRL taxonomies.

Rule 1: Data should be complete

The completeness rule is automated by ensuring that certain data items are present in a report. Commonly, reports that do not contain basic information such as the name of the company are rejected by the data collector. A more sophisticated use is to mark as mandatory any item of data that you want to compare across all reports.
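As a minimal sketch of the idea (in plain Python rather than an XBRL rule language, and with hypothetical fact names), a completeness rule simply checks that every mandatory item is present in a submitted report:

```python
# A minimal sketch of a completeness rule. A report is modelled as a dict
# of reported facts; the mandatory item names here are hypothetical.
MANDATORY_ITEMS = {"company_name", "reporting_period", "total_revenue"}

def check_completeness(report: dict) -> list[str]:
    """Return one error message per mandatory item missing from the report."""
    missing = MANDATORY_ITEMS - report.keys()
    return [f"Mandatory item '{item}' is missing" for item in sorted(missing)]

# A report without the company name would be rejected by the collector:
print(check_completeness({"reporting_period": "2023-Q4", "total_revenue": 1_000_000}))
# ["Mandatory item 'company_name' is missing"]
```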

Automating this rule delivers full data sets, which are significantly more valuable than ones where key facts are missing, since filling in or working around such gaps is expensive and time-consuming. In practice, though, there is often a valid reason for omitting almost any given piece of data from a report, which makes completeness rules a rather blunt tool for data quality.

Rule 2: Data should be correct

Automating data correctness rules can minimise data cleansing and maximise analysts’ confidence by blocking clearly bad data from being submitted. This is an area where XBRL already excels without additional rules: for example, no one collecting XBRL data has to deal with bad data types (e.g. text submitted where numbers were required) or incorrect breakdowns.

Basic data-type rules can be enhanced by applying business understanding to the data being collected. Pattern checks can verify that identifiers are in the correct format, and range checks can be applied to numbers so that, for example, revenue is always positive.
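As an illustration (again in plain Python; the LEI-style identifier pattern and the non-negative revenue rule are assumptions for the example, not requirements of any particular taxonomy), pattern and range checks might look like this:

```python
import re

# Sketch of correctness rules: a pattern check on an identifier and a range
# check on a number. Both rules are illustrative assumptions.
IDENTIFIER_PATTERN = re.compile(r"[A-Z0-9]{18}[0-9]{2}")  # LEI-style: 20 characters

def check_correctness(report: dict) -> list[str]:
    errors = []
    identifier = report.get("entity_identifier", "")
    if not IDENTIFIER_PATTERN.fullmatch(identifier):
        errors.append(f"Identifier '{identifier}' is not in the expected format")
    revenue = report.get("total_revenue")
    if revenue is not None and revenue < 0:
        errors.append(f"Revenue of {revenue} is negative")
    return errors

print(check_correctness({"entity_identifier": "BAD-ID", "total_revenue": -5}))
# ["Identifier 'BAD-ID' is not in the expected format", 'Revenue of -5 is negative']
```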

Nothing knocks an analyst’s confidence more than seeing daft values in data, particularly as these will often show up as “interesting” before they have to be ignored. While there may be a company that correctly reports its revenue as negative, finding a way to let that one filing bypass the rule is likely far less effort than omitting correctness rules entirely.

Rule 3: Data should be consistent across reports

Consistency rules ensure that data from different reports can be put together for analysis. Without these checks, the data may simply not add up, or there will be jumps and gaps to be cleaned up. Three scopes of rule are needed to deliver fully consistent data:

  • Single report rules ensure that data is consistent within a report. When these are automated, totals equal the sum of their breakdowns, ratios are correctly calculated and balanced calculations actually balance (see the sketch after this list).
  • Cross-report rules ensure that equivalent data from different submissions within the same period is consistent. There is usually only minimal overlap in reported figures; even so, time is saved because post-processing such as normalising the names of companies and persons is not necessary when these checks are in place.
  • Cross-period rules ensure that data from different periods can be joined up to create a consistent view over time. Without these, the data may contain unexplained deltas which are difficult to reconcile when the data is needed.
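To make the first of these concrete, here is a minimal Python sketch of a single report consistency rule; the fact names and rounding tolerance are assumptions, and in practice such a rule would be defined in the taxonomy itself:

```python
# Sketch of a single report consistency rule: a total must equal the sum of
# its breakdown. The fact names and the rounding tolerance are assumptions.
BREAKDOWNS = {"total_assets": ["current_assets", "non_current_assets"]}

def check_consistency(report: dict, tolerance: float = 0.5) -> list[str]:
    errors = []
    for total, parts in BREAKDOWNS.items():
        expected = sum(report.get(part, 0) for part in parts)
        reported = report.get(total, 0)
        if abs(reported - expected) > tolerance:
            errors.append(f"{total} is {reported} but its breakdown sums to {expected}")
    return errors

print(check_consistency({"total_assets": 100,
                         "current_assets": 40,
                         "non_current_assets": 70}))
# ['total_assets is 100 but its breakdown sums to 110']
```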

It is worth noting that when cross-report or cross-period inconsistencies are flagged by rules, it is not necessarily the most recent submission that is wrong. To achieve the desired level of consistency, data collection portals should therefore allow previous submissions to be corrected as well.

Rule 4: Data should be consistent with other sources

Of course, a single data collection programme does not exist in a vacuum: the collected data must sit alongside organisational master data and data held in other systems. Significant costs can be avoided by automating rules that enforce consistency with this external data.

Rules that either call out to external data sources, or check against imported static data, can make the collected data much more compatible and usable across the business.
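As a sketch of the second approach (the master data table below is a hypothetical stand-in for an organisational registry; a production rule might instead call out to an external service), checking reported entity details against imported master data could look like this:

```python
# Sketch of an external consistency rule: reported entity details must match
# imported master data. The table contents are hypothetical.
MASTER_DATA = {"213800EXAMPLELEI0012": "Example Holdings plc"}

def check_against_master_data(report: dict) -> list[str]:
    identifier = report.get("entity_identifier")
    if identifier not in MASTER_DATA:
        return [f"Identifier '{identifier}' is not present in the master data"]
    expected_name = MASTER_DATA[identifier]
    if report.get("company_name") != expected_name:
        return [f"Name '{report.get('company_name')}' does not match '{expected_name}'"]
    return []

print(check_against_master_data({"entity_identifier": "213800EXAMPLELEI0012",
                                 "company_name": "Example Ltd"}))
# ["Name 'Example Ltd' does not match 'Example Holdings plc'"]
```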

Do you automate these rules?

In many cases, data quality rules are effective at improving the value of collected data. They build confidence in the data, which opens the door to automating other rule-based areas. All of these rules can be implemented using the CoreFiling Taxonomy Management System (TMS) and applied in XBRL portals such as CoreFiling’s True North.

If you would like to see how to automate all types of data quality rules, or are ready for a new round of automation, CoreFiling are currently running hands-on taxonomy workshops. In these, you can use cutting-edge taxonomy technology to write data quality rules under the guidance of our expert consultants.

For more information and to book a workshop please contact us.