Data profiling is described in this book as a generic technology. Any specific implementation of software and process to support it will be more or less complete at each step. For example, in value analysis you could endlessly invent new analytical techniques to micro-define what is acceptable; similarly, you can define rules for business objects almost without limit.
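To make the idea of value analysis concrete, here is a minimal sketch of the kind of per-column check such a tool might perform. The function name, its parameters, and the report fields are all illustrative assumptions, not a description of any particular product; the point is that even a handful of simple measures (null rate, cardinality, frequent values, rule violations) covers much of the ground before micro-definition sets in.

```python
from collections import Counter

def profile_column(values, allowed=None):
    """Basic value analysis for one column: null rate, cardinality,
    most frequent values, and (optionally) violations of an
    allowed-value rule. All names here are illustrative."""
    non_null = [v for v in values if v is not None]
    freq = Counter(non_null)
    report = {
        "count": len(values),
        "null_rate": (len(values) - len(non_null)) / len(values) if values else 0.0,
        "distinct": len(freq),
        "top_values": freq.most_common(3),
    }
    if allowed is not None:
        # Values observed in the data but not permitted by the rule.
        report["violations"] = sorted(v for v in freq if v not in allowed)
    return report

# Hypothetical status column with one unexpected code and one null.
report = profile_column(["A", "B", "A", None, "X"], allowed={"A", "B", "C"})
```

Each additional rule added to a profiler of this sort narrows "acceptable" a little further, which is exactly where the diminishing returns discussed below begin.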
It is easy to fall into "analysis paralysis" when performing data profiling: trying to micro-define correctness to the ultimate level and then burning up machine time for days trying to validate those definitions. At some point the process yields too little to be worth the effort. Practitioners need to strike the right balance to get the most value from the work being performed.
Although overanalyzing data is a risk, it is rarely what happens in practice. The most common failing is performing too little analysis. Too often the desire to get results quickly drives teams through the process with too few rules defined and too little thinking about the data.
Used effectively, data profiling can be a core competency technology that significantly improves data quality assessment findings, shortens the implementation cycles of major projects by months, and improves end users' understanding of the data. It is not the only technology that can be used, but it is probably the single most effective one for improving the accuracy of data in our corporate databases.