To view the full-size Address and Data Quality Landscape, click here.
You can sort the table by selected criteria to suit your requirements. The set of criteria varies from time to time.
All information in the table has been verified by the providers. To learn more about a provider, continue reading below or click on the provider in the table.
|Data quality tool provider|Duplicate detection|Flexible adjustment of matching rules possible|System-generated merge suggestions|Rules for merging duplicates|Fill-rate analyses|Quality analyses|Enrichment with external data|Reporting on the analyses|Integration into a CRM landscape possible|Integration into an e-commerce landscape possible|
|---|---|---|---|---|---|---|---|---|---|---|
|Deutsche Post Adress| | | | | | | | | | |
|Deutsche Post Direkt| | | | | | | | | | |
|AS Address Solutions| | | | | | | | | | |
|relate| | | | | | | | |via interfaces|via interfaces|
|loqate| | | | | | | | |via plug-in or interfaces|via plug-in or interfaces|
You can find more about Address Solutions here.
Find out more about Melissa Data here.
You can find more about loqate here.
Founded in 2012, eXotargets has become a major information aggregator in Germany. As a data provider, eXotargets offers a variety of data points for the areas of Datacare (address cleansing, data maintenance, supplementary relocation addresses, deceased register, building directory/street file/geocoordinates), Dataselect (lead generation, re-targeting) and eIDV (KYC, Electronic Identity Verification).
Introduction – why is the topic relevant, and why is it strategically so important?
For all companies seeking direct contact with their customers, the customer database is the linchpin of coordinated sales and marketing activities. The conviction that quality – in particular the simple correctness of address data – plays an essential role in this process is finally gaining acceptance.
Customer relationship management, data-driven marketing and sales, value to the customer and database marketing all demand uncompromising quality and timeliness. The usual buzzwords include address validation, data cleaning, data cleansing, data quality tools, data analysis and optimal data collection by your own employees, but also optimal support for self-service data entry by prospects and customers. Only then can e-mail addresses for e-mailings and e-mail marketing be captured correctly from the outset.
In July 2020 we created and published the first German address and data quality software and service provider Landscape. The wide variety of technology, tools and software solutions does not make the choice of products and/or service providers any easier. We can help you with this.
In this document you will learn the most important steps of pragmatic data management and how to improve the quality of your customer addresses step by step.
Best-practice examples – how does the analysis usually start?
At some point, the CEO receives a letter in which the last or first name is misspelled. Then the “inner” question automatically arises:
How does it look in my own company?
In the next executive meeting, you will then ask in turn: Is it IT, marketing, sales, customer service or database marketing that is responsible?
If you can find a person responsible for data quality management at all, the next questions are: “Is our database, are our addresses okay? What are we doing to keep it that way? Is there faulty data? Who looks after it, who corrects it? And what about all the other data?”
At this point, the person addressed often responds with politically colored sentences such as: “Don’t worry! No mailings came back as ‘undeliverable’ in the last campaign.” (However, no forwarding instruction was printed as information for the letter carrier, so no mailings could come back at all. A forwarding instruction is the text above the address field: “If moved, please forward and return to us with address correction card.” It is a premium address service offered by Deutsche Post.) In the end, people often try to leave the impression that everything is in order.
No recipient likes to read his own name misspelled. Correct and sensibly used personalization in a letter or e-newsletter increases the response rate. Poor address quality severely impairs all analyses and thus the basis for decision-making. Incorrect addresses lead to increased mail returns, wasted budgets and lost revenue. Duplicate addresses or duplicate letters frustrate recipients (“Man, they must have money to burn”).
If, for example, mother and daughter receive letters or catalogs at the same time, but with different offers, this leads to a loss of sales, since they naturally always pick the cheaper offer.
Only with standardized, cleansed and up-to-date addresses can external data be added that enables further segmentation or qualification (for example microgeographic or lifestyle data).
Data quality management: definition of address and data quality
Data is information. Important information. Data is the new oil! This guiding principle has become increasingly accepted in recent years.
Data, which of course also includes addresses, is the basis for good dialog marketing, targeted sales, perfect service, customized products, sophisticated reporting, detailed analyses, the determination of key figures … and much more. From the company’s point of view, it is all about individualization and personalization. The prospect or customer wants a “felt closeness”; he wants to be understood.
The more valid this data is, the better these measures work and the better prospects and customers feel taken care of. For most companies, it is all about improving the status quo. High data quality is therefore the long-term goal.
In addition to the data protection aspects, which we will not go into here but merely point to, companies should also consider the aspect of motivation:
What does mindfulness mean in relation to this topic? What motivates employees to achieve good address and data quality? This is ultimately a management task and a question of attitude and mindset.
All data quality management (DQM) efforts have one goal:
to achieve and maintain the best quality of existing data in an efficient way.
And it does not stop at one-time maintenance: the aim is to keep this data permanently up to date, using everything available to the company and, where possible, with the help of prospects and customers themselves.
“Information quality is the measure of the fulfillment of the ‘entirety of the requirements for an information product that refer to its suitability for satisfying given information needs’.[1] Statements about the quality of information refer, for example, to how accurately it describes reality or how reliable it is, i.e. to what extent it can be used as a basis for planning one’s own actions.
The term data quality (as a measure of the quality of data) is closely related to ‘information quality’. Since data is the basis for information, data quality affects the quality of the information extracted from that data: no ‘good’ information from bad data.”
A little further down, the same Wikipedia article states:
“Quality criteria for data quality differ from those for information quality; criteria for data quality are according to:
“Here the focus is on a single company. Data governance here is a data management concept in terms of the ability of an organization to ensure that high data quality is maintained throughout the data lifecycle and that data controls are implemented to support business objectives.
The key focus areas of data governance include availability, usability, consistency, data integrity and data security. This includes establishing processes that ensure effective data management across the enterprise, such as accountability for the adverse effects of poor data quality and ensuring that the data an organization holds can be used by the entire organization.
A data steward is a role that ensures that data governance processes are followed and policies are enforced, and that also makes recommendations for improvements to data governance processes.
This section could also be part of section 6, “Quality of addresses and data is a leadership task”, because it is about responsibility. Further below, it is about management responsibility. A data steward or similar role corresponds partly to a data protection officer and partly to an operational manager.
We will be taking a more detailed position on this in the coming weeks.
Data quality: is there a difference between addresses and data?
There is not much difference, but we will add a few notes about it.
Address quality refers to the data belonging to the address. These are usually variables such as salutation, title, first name, last name, street and house number or P.O. box, postal code and city. The postal code can be differentiated further into the street postal code and the P.O. box postal code. This also includes variables such as the e-mail address, telephone number, mobile number, fax number, etc., because address quality is all about the delivery of the message, regardless of which communication channel is used to deliver it.
All other data that does NOT directly belong to the address is considered separately under the heading of data quality. This is not entirely free of overlaps and thus contradictions. Nevertheless, we summarize the topic of data quality management at this point as follows: it also covers criteria that cannot be directly assigned to a prospect or customer but are used in the context of transactions. Terms such as Product Information Management (PIM), Master Data Management (MDM) or similar are used for this purpose.
How do you determine address quality within data quality management activities? We have developed a simple method for this. What are the most important measures? Please carry out the following simple checks:
Step one – visual inspection
Transfer all existing addresses (customers, prospects, raffle entries, customer service inquiries, etc.) from a contiguous postal code area (preferably one you know well personally) into an Excel file. Around 5,000 addresses is already sufficient. Before the check begins, insert one or more columns in which comments can be entered for each address.
Then sort the addresses according to the various criteria and, for example, take a closer look at the first 1,000 and the fourth 1,000 addresses on a random basis.
Each of these 1,000 address packets is now examined as follows:
First, sort the addresses by last name and first name, independent of the postal code. Look at the spelling of the last and first names and you will quickly see the different spellings in which identical names have been entered: wrong upper/lower case, the first name in the last-name field or vice versa, the company name in a name field, or the legal form missing.
Then check whether the salutation matches the first name. Titles, too, are regularly entered incorrectly in address fields: sometimes next to the first name, sometimes in a field of their own; “Dr.” appears next to “Doctor” and “Prof.” next to “Professor”, and so on. Now sort the addresses by postal code, street, last name and first name.
You will quickly determine whether the file contains person duplicates or whether several family members are entered under the same address. Are these grandmother, mother and daughter? Or is it a coincidence? In the last step, check whether all postal codes have five digits. Is the leading zero missing from East German addresses (which unfortunately often happens when exporting to Excel)? Have foreign addresses possibly crept in, and are they marked accordingly? Now count the number of faulty addresses within the packages. If the error rate is higher than two to three percent, you should take the following steps immediately.
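Some of these checks can be automated before the manual review begins. The sketch below is a minimal illustration under assumed field names (`first_name`, `last_name`, `zip`, `street`); adapt it to the columns of your own export:

```python
from collections import Counter

# Hypothetical sample records; the field names are assumptions.
addresses = [
    {"first_name": "anna", "last_name": "Müller", "zip": "1067", "street": "Hauptstr. 5"},
    {"first_name": "Anna", "last_name": "Müller", "zip": "01067", "street": "Hauptstr. 5"},
    {"first_name": "Peter", "last_name": "Schmidt", "zip": "80331", "street": "Ring 2"},
]

def check_zip(rec):
    """Flag postal codes without exactly five digits. A four-digit code
    is often an East German zip whose leading zero was lost in Excel."""
    z = rec["zip"]
    if len(z) == 4 and z.isdigit():
        return "leading zero probably lost, should be 0" + z
    if len(z) != 5 or not z.isdigit():
        return "invalid postal code: " + z
    return None

def check_case(rec):
    """Flag name fields that are not capitalized the usual way."""
    return [f"{f} not capitalized: {rec[f]}"
            for f in ("first_name", "last_name")
            if rec[f] and not rec[f][0].isupper()]

# Crude first pass for duplicates: same last name at the same street.
key_counts = Counter((r["last_name"].lower(), r["street"].lower()) for r in addresses)
possible_duplicates = [k for k, n in key_counts.items() if n > 1]

for rec in addresses:
    for issue in filter(None, [check_zip(rec), *check_case(rec)]):
        print(rec["last_name"], "->", issue)
print("possible duplicates:", possible_duplicates)
```

Such a script only flags candidates; the actual assessment remains a manual task, exactly as described above.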
A short excursion into B2B on the topic of data quality management:
Very often the address models of ERP, e-commerce or other systems have two or three fields intended for the company name. This usually leads to a huge problem: the first part of the company name is entered in the first field, additions to the name in the second field, and the rest, including the legal form, end up in the third field.
When a new company record is created, the following can happen: an employee checks whether the company already exists, but enters the company name in a different way, so the duplicate check program does not find it. The supposedly new company is created a second time.
Or the prospect or customer spells the name slightly differently from the version already in the system. In connection with e-commerce, this company would also be created a second time, because here, too, the duplicate check usually does not recognize the duplicate. If an employee creates such a designation, the argument afterwards is often: “The customer wanted it that way, so I created it exactly like that.”
Intermediate conclusion on B2B data quality management:
Exactly at this simple point there is often a lot of garbage in the databases, especially in B2B. In our projects we have often found between six and ten different spellings of the same company as duplicates. This can be avoided through training and rules.
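The spelling variants described above can largely be neutralized with a normalization rule before the duplicate check runs. A minimal sketch (the legal-form list and stopwords are illustrative assumptions, not a complete rule set):

```python
import re

# Illustrative German legal forms and connectors; a real rule set is larger.
LEGAL_FORMS = {"gmbh", "ag", "kg", "ohg", "gbr", "ug", "se"}
STOPWORDS = {"co", "und", "u"}

def normalize_company(name: str) -> str:
    """Build a crude match key: lowercase, strip punctuation,
    drop legal forms and connectors, collapse whitespace."""
    s = re.sub(r"[.,&/+-]", " ", name.lower())
    tokens = [t for t in s.split() if t not in LEGAL_FORMS | STOPWORDS]
    return " ".join(tokens)

variants = [
    "Müller & Söhne GmbH",
    "Müller und Söhne",
    "MÜLLER & SÖHNE GmbH & Co. KG",
    "Mueller & Soehne GmbH",  # ASCII transliteration: NOT caught by this key
]
for v in variants:
    print(repr(v), "->", repr(normalize_company(v)))
```

The first three German variants collapse to the same key, so the duplicate check finds them; transliterated spellings still require fuzzy matching or, as noted above, training and rules.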
Step two – address audit
Many address service providers offer a cost-effective address audit. Your addresses are compared with different reference data. As a result, you receive an assessment of how good the entire data is. After this check, you will be able to control the necessary qualification measures in a more targeted manner. You thus rule out the much too expensive watering can principle “everything for everyone”.
Tip from the practice: Do not send a file with all addresses for checking. Divide your data into meaningful groups and have them checked separately for quality. For a duplicate check, of course, all addresses must be checked at once.
Step three – data audit
Here the contents of the variables are analyzed with univariate or simple statistical methods and “anomalies”, incorrect entries or unnecessary values are shown. More about this – further down in the section Basic Data Quality Know-how.
Step four – Summary of audits and visual inspection
From the three check steps a summary is created for the management. The detailed analysis includes the identification of weaknesses and good performance. For the weak points, there are recommendations a) for one-time cleanup and b) for ongoing optimization and control.
Ideas for key performance indicators (KPIs), special management tasks, process optimization or IT support round out the picture.
This is the basis for further action, acting and controlling.
Step five – the decision: “do it yourself” or “have it done”
Before the whole procedure of the cleanup can be started, the question arises: Do it yourself or have it done by the service provider?
One rule clearly speaks for “do it yourself”: “Addresses belong to the core competence of every company that does CRM and dialog marketing.” Only with smaller address lists, or in the initial phase, can a service provider be faster and easier.
In the medium term, you should always maintain the addresses within the company. Addresses are the capital of every company, and a service provider (unless he is a proven specialist for the industry) cannot reflect a company’s individuality. This goes hand in hand with employee training: rules are created for how addresses are to be recorded in the future and how data qualification is carried out.
► Practical tip: International companies should also have the topic of address quality dealt with in the respective country. The head office often has too little knowledge about regional peculiarities and general conditions.
Step six – the one-time or initial cleanup
Normalization or standardization: Prepare the address data so that all information that can be processed is written into the corresponding fields. Then check and correct the salutation using a first-name table and the correct form of address. Such tables are available from various providers, also for many Western and Eastern European countries.
Postal cleanup: With the postal service’s reference tables you can standardize the spelling of the street, the city name and, if necessary, the postal code. For addresses that have not been validated for a longer period (six to twelve months), a relocation check is recommended; this allows you to switch to the new address. By matching against registers of deceased persons and insolvent persons and companies, you can clean up your addresses in a further step.
Completion: With the correct address a completion or correction of company names is now possible.
Duplicate cleanup: After you have made all necessary or possible corrections and enhancements, duplicate matching is useful. You must check for person and family duplicates (business-to-consumer) as well as company and contact-person duplicates (business-to-business).
Manual correction: The last step is the manual corrections. This is certainly time-consuming but, depending on customer value, mandatory. Unfortunately, the software does not recognize all errors and therefore cannot correct or clean them automatically. These “uncertain duplicates or spellings” are now reviewed record by record by your address quality experts; if necessary, a search on Google, in the imprint or at the residents’ registration office is carried out, and each record is then either confirmed as correct or corrected accordingly.
External enrichment: Only now can you enrich your addresses with telephone numbers, industry or micro-geographic or lifestyle data.
Create links: Furthermore, you should link several persons from one family or company with each other. For company addresses, it is also recommended to map group relationships or to link parent and subsidiary companies.
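Two of the steps above, the salutation check against a first-name table and the detection of “uncertain duplicates” for manual review, can be sketched in a few lines. This is a minimal illustration: the tiny first-name table and the 0.85 similarity threshold are assumptions; commercial reference tables and matching rules are far more extensive:

```python
from difflib import SequenceMatcher

# Tiny illustrative first-name table; commercial tables cover
# hundreds of thousands of names in many countries.
FIRST_NAME_GENDER = {"anna": "f", "maria": "f", "peter": "m", "hans": "m"}
SALUTATION = {"f": "Frau", "m": "Herr"}

def expected_salutation(first_name: str):
    """Derive the expected salutation from the first name, if known."""
    return SALUTATION.get(FIRST_NAME_GENDER.get(first_name.lower()))

def is_possible_duplicate(a: dict, b: dict, threshold: float = 0.85) -> bool:
    """Fuzzy comparison of two address records. Pairs above the
    threshold but not identical are candidates for manual review."""
    def key(r):
        return f"{r['last_name']} {r['first_name']} {r['street']}".lower()
    return SequenceMatcher(None, key(a), key(b)).ratio() >= threshold

rec1 = {"first_name": "Anna", "last_name": "Müller", "street": "Hauptstr. 5"}
rec2 = {"first_name": "Anna", "last_name": "Mueller", "street": "Hauptstr. 5"}

print(expected_salutation("Anna"))        # -> Frau
print(is_possible_duplicate(rec1, rec2))  # -> True
```

The fuzzy score only narrows down the candidates; the final decision on uncertain pairs remains with the manual correction step described above.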
Here again the complete procedure in an overview:
Fig. 7.24 Example for address and data quality cycle (Source: bdl, 2014)
► Tip from practice: Records that have already been checked against each other should be flagged so that the same records are not processed again next time. Then only newly added, uncertain problem cases need to be checked again.
Now comes the endurance run: ongoing cleanup or sustainable data quality management
All the above-mentioned test steps of the initial or one-time cleanup must of course be performed regularly and repeatedly within the running processes. In companies where a large number of people involved touch the addresses and possibly correct them, ongoing quality management is necessary. The same is true if there are webshops or other Internet sources of data collection (newsletters, etc.) where customers register themselves.
In addition, the customer should be asked about possible changes at each contact, at regular intervals, but at least once a year; alternatively, he receives a letter or e-mail with a personalized landing page and a request to correct the incomplete address. A response incentive for more attention or response is recommended.
In principle, it is a matter of avoiding a) typical errors, b) insufficient data quality, c) unnecessary costs and thus achieving high quality.
Concluding remark on the topic of operative Data Quality Management (DQM) and better address quality:
An initial cleanup can take between three and nine months, depending on the amount of addresses and the quality/condition of the necessary addresses. Of course, the costs vary greatly. This depends, for example, on the software used and on how much manual post-processing is required and how often the addresses have to be checked during ongoing business. Companies that send a mailing to the majority of their customers every month have different processes than companies that send only four mailings a year to a selected target group. It is important to provide a sufficiently large budget for initial external support, software, validation and manual maintenance.
Don’t be surprised if your bank checks this aspect “How good are your addresses?” the next time you ask for credit.
Perfect address management is the necessary basis for your future success and therefore one of the most important tasks in every company – regardless of whether you are dealing with 500 or five million addresses. These targeted cleansing and quality measures have usually paid off after six – at the latest after twelve months.
As briefly described above, univariate or simple statistical methods are used here to analyze the contents of the variables and show “anomalies”, incorrect entries or unnecessary values.
In the first part of the audit we define the 20 or 30 most important variables that are particularly important for the company. It usually does not make sense to look at all the variables that exist in the customer data. That would be a Herculean task.
What is important data that can be directly assigned to a customer? In the case of persons, this is, for example, the age or date of birth, nobility or academic title, profession, position, form of address, gender, segment codes and much more.
For companies, this would be, for example, the legal form, country code, codes for state, language and currency, customer and segment codes, the sales area assigned to the company, the employee of your own company assigned to this company, the current contact person, whether it is a key account customer, and so on.
Is the homepage field filled? Can it be derived from the e-mail address? Are the industry codes all correctly maintained? Is there a reference to the sister, subsidiary or parent company? Much more is possible.
Now you put these variables into a flat file and run standard analyses on it.
On the one hand, this is a count of value frequencies. In alphanumeric fields you will find the most wonderful ideas of how to write actually identical field contents in the most varied ways. Numeric fields often contain values that should not normally appear there.
For numeric field types, a mean value calculation makes sense; outlier values can then be recognized quickly. Outliers often point to programming errors, wrong selection-list contents or data transferred from the test system, all of which affect quality accordingly.
The nice thing about this analysis is: You can see where the rules of capture are not followed. You can also see which processes are not yet round.
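Both analyses can be sketched with standard library means alone. The data below is made up for illustration; in practice the columns come from the flat file described above:

```python
from collections import Counter
from statistics import mean, stdev

# Made-up extracts of two variables from the flat file.
legal_forms = ["GmbH", "gmbh", "GmbH.", "AG", "GmbH", "Gesellschaft mbH", "GmbH"]
ages = [34, 41, 29, 37, 52, 44, 199, 38]  # 199 is clearly an entry error

# Frequency of values: variant spellings of identical content
# become visible immediately.
frequencies = Counter(legal_forms).most_common()
print(frequencies)

# Mean and standard deviation: a simple outlier screen for numeric fields.
m, s = mean(ages), stdev(ages)
outliers = [a for a in ages if abs(a - m) > 2 * s]
print(f"mean={m:.1f}, outliers={outliers}")
```

The frequency count exposes the spelling variants, and the two-sigma screen flags the impossible age; both findings then feed into the summary for management described in step four.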
Our motto: Show me your data and I’ll tell you who you are and how well you measure!
Yes, this is a very important task for management, and it is not only about IT systems. If employees do not know why and for what purpose they are doing it, or if there are no incentives to maintain data in the system, it will not work. Only if the employees know, and then exercise care, will your company be successful.
As already mentioned, it is first of all about someone taking responsibility for the topic in the company. Someone who shows himself to be a leader and caretaker on this topic, someone who constantly ensures that the quality remains high.
On the other hand, lasting quality can only be controlled by KPIs. These are thus also integrated in target agreements. In this way, the management and leadership team can see whether the company is heading in the right direction, whether quality is gradually improving and whether reporting is therefore automatically improved.
An explanation and motivation of why data and address quality is so important is, of course, also part of it. Why should a sales force employee enter the data treasures stored in his head into a CRM? What does he get out of it? What do others get out of it? How does this help him manage his own work better? How can his work be made easier by automating tasks based on good data? Which tasks should be tackled together for an integrated data quality management system?
This topic of data quality management is also a management task because …
Address and data quality is not a cost factor but a value-added factor!
Data maintenance and data quality must become a corporate culture. It requires an entrepreneurial attitude of every employee that certain selected data is extremely important for the company. No ifs, ands or buts.
Key figures are, for example, “number of completely filled addresses”, “date of last confirmation”, “date of last correction”, “number of mail returns”, “number of addresses currently not advertisable”, etc. All key figures are interesting when arranged by segment, since good customers are contacted rather frequently and are thus regularly confirmed or corrected. For less valuable customers (low customer value), a different effort has to be made. (See also “Evaluation of addresses” in the subchapter “CRM Cockpit”.)
We define a few simple, important KPIs or key figures to get you started.
You can then extend or refine the KPIs for each country, region or target group. But first and foremost, the newest prospects, the active customers and the recently passive customers (who have not ordered for some time) need to be cleaned up and kept at a high quality level.
The company can make these KPIs available to users for monitoring in reporting, in a Business Intelligence (BI) application.
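As a sketch of what such a KPI calculation can look like (field names, thresholds and the reporting format are assumptions; your BI application will use its own model):

```python
from datetime import date

# Hypothetical customer records; field names are assumptions.
customers = [
    {"zip": "01067", "email": "a@example.com",
     "last_confirmed": date(2024, 3, 1), "mail_returned": False},
    {"zip": "", "email": "",
     "last_confirmed": date(2019, 6, 1), "mail_returned": True},
    {"zip": "80331", "email": "b@example.com",
     "last_confirmed": date(2023, 11, 15), "mail_returned": False},
]

def kpis(records, today=date(2024, 6, 1), max_age_days=365):
    """Compute a few of the key figures named above as percentages."""
    n = len(records)
    complete = sum(1 for r in records if r["zip"] and r["email"])
    fresh = sum(1 for r in records
                if (today - r["last_confirmed"]).days <= max_age_days)
    returns = sum(1 for r in records if r["mail_returned"])

    def pct(x):
        return round(100 * x / n, 1)

    return {
        "completely_filled_pct": pct(complete),
        "confirmed_within_a_year_pct": pct(fresh),
        "mail_return_rate_pct": pct(returns),
    }

print(kpis(customers))
```

Computed per segment and plotted over time, figures like these make it visible whether quality is gradually improving, which is exactly what the target agreements mentioned above require.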
Data Quality – Outlook:
Data is the new oil. You hear this sentence more and more often. But before the oil well can be tapped, before high quality can be achieved or AI analyses can be performed, the basis, the database, must be created. Many people talk about Big Data. Yes, the mass of data is increasing all the time. But Big Data is not the problem at the beginning.
The actuality of the most important data is the challenge!
Only when the company turns to the many possible external enrichments and to its own abundant data (in the webshop, in the website logfiles, on social media …) does Big Data come into play. But even here, one should not look at the (data) mountain as a whole and try to climb it in one go.
The company should focus on the data that is likely to bring the highest added value. “A lot helps a lot”, has unfortunately always been and still is the worst advisor on the way to higher address quality.
Where the most important data is located is basically irrelevant, although the fewer sources, the better. We know from the Blissfully study that there are unfortunately far too many cloud or SaaS applications. The data is widely scattered or hidden in silos, and hardly anyone has an overview.
Whether in CRM, ERP, e-commerce or on a Data Management Platform (DMP), on-premise or in the cloud: the main thing is to have easy access and to be able to feed the extracted and cleansed data back in easily. The data should be refined, and the employees should be involved in the refinement through targets.
The fashionable topic of digital transformation also only works if data quality is sustainably good. The transformation requires reliability: avoid bad data and poor data quality, and continuously optimize the most important data.
These tasks often take six to nine months, but they are worth it. After a short time, optimal data quality is in place, and an ROI for this investment is guaranteed and quickly achieved; we have proven this in every one of our projects so far. On top of that, you save budget on mailings, reporting and decisions become better, you avoid costs and have significantly more chances of generating revenue.
Data and address quality is a critical success factor. Accuracy is a must, if only because of the GDPR (DSGVO). This is already true today, and it will be even more so in the future. The GDPR alone obliges every company to keep its data absolutely up to date.
As already written, automation in marketing, sales and service or AI analysis and prognoses only work if the data is clean. And ultimately, management is responsible for assigning tasks, taking over operational tasks and controlling them.
Further information and further reading – who else writes about this exciting topic besides us: