The fundamental notion of business-to-business CRM is usually described as allowing a bigger business to be as responsive to the requirements of its customers as a small business. In the early days of CRM, "responsive" too often translated into merely "reactive". Successful larger businesses recognise that they have to be proactive in seeking out, and listening to, the views, concerns, needs and levels of satisfaction of their customers. Paper-based surveys, such as those left in hotel bedrooms, generally have a low response rate and are usually completed by customers who have a grievance. Telephone-based interviews tend to be influenced by the Cassandra phenomenon. Face-to-face interviews are costly and can be led by the interviewer.
A big international hotel chain wanted to attract more business travellers. It decided to conduct a customer satisfaction survey to discover what it needed to do to enhance its services for this type of guest. A written survey was placed in each room and guests were encouraged to fill it out. However, when the survey period was complete, the hotel found that the only people who had filled in the surveys were children and their grandparents!
A big manufacturing company conducted the first year of what was intended to be an annual customer satisfaction survey. In the first year, the satisfaction score was 94%. In the second year, with the same basic survey topics but a different survey vendor, the satisfaction score dropped to 64%. Ironically, over the same period, the company's overall revenues doubled!
Why the difference? The questions were simpler and phrased differently. The order of the questions was different. The format of the survey was different. The targeted respondents were at a different management level. And the Overall Satisfaction question was placed at the end of the survey.
Although all customer satisfaction surveys are used for gathering people's opinions, survey designs vary dramatically in length, content and format. Analysis techniques may utilise numerous charts, graphs and narrative interpretations. Companies often use a survey to test their business strategies, and many base their business plan upon their survey's results. BUT... troubling questions often emerge.
Are the results always accurate? Sometimes accurate? Accurate at all? Are there "hidden pockets of customer discontent" that a survey overlooks? Can the survey information be trusted enough to take major action with full confidence?
As the examples above show, different survey designs, methodologies and population characteristics will dramatically change the outcome of market research. It therefore behoves a business to make absolutely certain that its survey process is accurate enough to produce a true representation of its customers' opinions. Failing that, there is no way the business can use the results for precise action planning.
The characteristics of a survey's design, and the data collection methodologies employed to conduct it, require careful forethought to ensure comprehensive and accurate results. The discussion that follows summarizes several key "rules of thumb" that must be adhered to if a survey is to become a company's most valued strategic business tool.
Survey questions should be categorized into three types: the Overall Satisfaction question – "How satisfied are you overall with XYZ Company?"; Key Attributes – satisfaction with key parts of the business, e.g. Sales, Marketing, Operations, etc.; and Drill-Down – satisfaction with issues that are unique to each attribute, and upon which action can be taken to directly remedy that Key Attribute's problems.
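As a sketch, the three question tiers can be represented in a simple nested structure. The attribute and question names below are illustrative inventions, not taken from any real survey:

```python
# Illustrative sketch of the three-tier question structure described above.
# All attribute and question names are hypothetical examples.
survey = {
    "overall": "How satisfied are you overall with XYZ Company?",
    "key_attributes": {
        "Sales": {
            "summary": "How satisfied are you with our Sales organisation?",
            "drill_down": [
                "How satisfied are you with the responsiveness of your account manager?",
                "How satisfied are you with the accuracy of quotations?",
            ],
        },
        "Operations": {
            "summary": "How satisfied are you with our Operations?",
            "drill_down": [
                "How satisfied are you with on-time delivery?",
            ],
        },
    },
}

# Each drill-down question ties back to the Key Attribute it can remedy.
for attribute, block in survey["key_attributes"].items():
    print(attribute, "-", len(block["drill_down"]), "drill-down question(s)")
```

Structuring the questions this way makes the later action planning direct: a weak Key Attribute score points straight at its own drill-down issues.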
The Overall Satisfaction question is placed at the end of the survey so that its answer is informed by more comprehensive reflection, the respondents having first considered their answers to the other questions. A survey, if constructed properly, will yield a wealth of information. These elements of design should be taken into account: First, the survey should be kept to a reasonable length. More than 60 questions in a written survey can become tiring, and anything over 8-12 questions begins taxing the patience of participants in a phone survey.
Second, the questions should use simple sentences with short words. Third, questions should ask for an opinion on only one topic at a time. For example, the question "how satisfied are you with our products and services?" cannot be answered effectively, because a respondent may have conflicting opinions on products versus services.
Fourth, superlatives like "excellent" or "very" should not be used in questions. Such words tend to lead a respondent toward an opinion.
Fifth, "feel-good" questions yield subjective answers on which little specific action can be taken. For example, the question "how do you feel about XYZ Company's industry position?" produces responses that are of no practical value in terms of improving an operation.
Although the fill-in-the-dots format is one of the most common kinds of survey, it has significant flaws that can discredit the results. First, all prior answers are visible, which invites comparison with the current question and undermines candour. Second, some respondents subconsciously seek symmetry in their responses and are guided by the pattern of their previous answers, not their true feelings. Third, because paper surveys are typically divided into topic sections, a respondent is more apt to fill down a column of dots within a category while giving little consideration to each question. Some Internet surveys, constructed in the same "dots" format, provoke the same tendencies, especially if inconvenient sideways scrolling is needed to answer a question.
In a survey conducted by Xerox Corporation, over one third of all responses were discarded because the participants had clearly run down the columns in each category rather than carefully considering each question.
TELEPHONE SURVEYS
Though a telephone survey yields a more accurate response than a paper survey, it may also have inherent flaws that impede quality results, including:
First, when a respondent's identity is clearly known, concern over the possibility of being challenged or confronted about negative responses later on generates a strong positive bias in their replies (the so-called "Cassandra Phenomenon").
Second, research indicates that people become friendlier as a conversation grows longer, thus influencing their answers to later questions.
Third, human nature dictates that people like to be liked. Gender biases, accents, perceived intelligence, or compassion can all influence responses. Similarly, senior-management egos often emerge in attempts to convey wisdom.
Fourth, telephone surveys are an intrusion on a senior manager's time. An unannounced call may create an initial negative impression of the survey, and many respondents will be partly focused on the clock rather than the questions. Optimum responses depend upon a respondent's clear mind and free time, two things that senior managers often lack. In a recent multi-national survey where targeted respondents were offered the choice of a telephone interview or other methods, ALL chose the other methods.
Taking precautionary steps, such as keeping the survey brief and using only highly trained callers who minimize idle conversation, will help reduce the issues mentioned above, but will not eliminate them.
The goal of a survey is to capture a representative cross-section of opinions across a group of people. Unfortunately, unless most of the individuals participate, two factors will influence the results:
First, dissatisfied people answer a survey more often than satisfied ones, because human nature encourages "venting" negative emotions. A low response rate will therefore usually produce disproportionately negative results (see drawing).
Second, the smaller the percentage of a population that responds, the less representative the sample is of the whole. For example, if 12 people are asked to take a survey and only 25% respond, the opinions of the other nine individuals are unknown and may be entirely different. However, if 75% respond, only three opinions are unknown, and the other nine are more likely to represent the opinions of the whole group. One can conclude that the higher the response rate, the more accurate the snapshot of opinions.
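The arithmetic above can be made concrete: the non-respondents bound how far the true group opinion can differ from the measured one. A minimal sketch, using the hypothetical 12-person population from the example (all figures are illustrative):

```python
def satisfaction_bounds(population, respondents, satisfied):
    """Best- and worst-case true satisfaction rate for the whole group,
    given that only `respondents` of `population` answered and `satisfied`
    of those gave a positive response."""
    unknown = population - respondents
    worst = satisfied / population             # all non-respondents dissatisfied
    best = (satisfied + unknown) / population  # all non-respondents satisfied
    return worst, best

# 12 people surveyed: a 25% response rate (3 replies, all positive)
# leaves 9 opinions unknown, so the true rate could be anywhere
# from 25% to 100%...
print(satisfaction_bounds(12, 3, 3))   # (0.25, 1.0)

# ...while a 75% response rate (9 replies, all positive) leaves only
# 3 unknown, narrowing the range to 75%-100%.
print(satisfaction_bounds(12, 9, 9))   # (0.75, 1.0)
```

The wider the gap between the two bounds, the less the survey result can be trusted as a snapshot of the whole group.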
Totally Satisfied vs. Very Satisfied: Debates have raged over the scales used to depict levels of customer satisfaction. In recent years, however, studies have indicated that a "totally satisfied" customer is between three and ten times more likely to initiate a repurchase, and that measuring this "top-box" category is significantly more precise than other means. Moreover, surveys that measure the percentage of "totally satisfied" customers, rather than the traditional sum of "very satisfied" and "somewhat satisfied", provide a more accurate indicator of business growth.
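The difference between the two measures is easy to see on sample data. Here is a sketch over a hypothetical set of five-point responses (the data and labels are invented for illustration):

```python
from collections import Counter

# Hypothetical responses on a five-point scale.
responses = [
    "totally satisfied", "very satisfied", "somewhat satisfied",
    "totally satisfied", "very satisfied", "neutral",
    "totally satisfied", "somewhat satisfied", "dissatisfied",
    "very satisfied",
]

counts = Counter(responses)
n = len(responses)

# "Top-box": the share answering "totally satisfied".
top_box = counts["totally satisfied"] / n

# Traditional measure: "very satisfied" + "somewhat satisfied".
top_two = (counts["very satisfied"] + counts["somewhat satisfied"]) / n

print(f"top-box: {top_box:.0%}, very+somewhat: {top_two:.0%}")
# prints "top-box: 30%, very+somewhat: 50%"
```

The two figures can move independently from year to year, which is one reason two surveys of the same customers can report very different "satisfaction" numbers.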
Other Scale Issues: There are other rules of thumb that can be used to ensure more valuable results:
Many surveys offer a "neutral" choice on a five-point scale for those who may not want to answer a question, or who are unable to make a decision. This "bail-out" option reduces the number of expressed opinions, diminishing the survey's validity. Surveys that instead use "insufficient information" as a more definitive middle-box choice persuade respondents to commit to an opinion unless they genuinely lack the knowledge to answer the question.
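One common way to handle such a middle-box choice is to exclude it from the scoring base, so the satisfaction figure reflects only informed opinions. A minimal sketch with invented responses (the labels and treatment are illustrative assumptions, not a prescribed method):

```python
# Hypothetical responses where "insufficient information" replaces the
# usual "neutral" middle box; such answers are excluded from the base.
responses = ["satisfied", "satisfied", "insufficient information",
             "dissatisfied", "satisfied"]

opinions = [r for r in responses if r != "insufficient information"]
score = opinions.count("satisfied") / len(opinions)

print(f"satisfaction: {score:.0%} of {len(opinions)} informed opinions")
# prints "satisfaction: 75% of 4 informed opinions"
```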
Scales of 1-10 (or 1-100%) are perceived differently by different age groups. Those who were schooled under a percentage grading system often consider 59% to be "flunking". These deep-rooted tendencies can skew people's perceptions of survey results.
There are several additional details that can enhance the overall polish of a survey. While a survey should be an exercise in communications excellence, the experience of taking it should also be positive for the respondent, as well as valuable for the survey sponsor.
First, People – Those responsible for acting upon issues revealed by the survey should be fully involved in the survey development process. A "team leader" should be accountable for ensuring that all pertinent business categories are included (approximately ten is ideal), and that designated individuals take responsibility for responding to the results for each Key Attribute.
Second, Respondent Validation – Once the names of potential survey respondents have been selected, each person is individually called and "invited" to participate. This step confirms that the person is willing to take the survey and elicits an agreement to do so, thus enhancing the response rate. It also verifies that the person's name, title, and address are correct, an area in which inaccuracies are commonplace.
Third, Questions – Open-ended questions are typically best avoided in favour of simple, concise, single-subject questions. The questions should also be randomised, mixing up the topics so that the respondent must continually consider a different subject rather than building upon the answer to the previous question. Finally, questions should be phrased in positive tones, which not only helps maintain an unbiased and uniform attitude while answering the survey, but also allows uniform interpretation of the results.
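The randomisation step can be sketched directly with the standard library's `random.shuffle`, which mixes the topic order so consecutive questions do not build on one another. The questions and topics below are placeholders:

```python
import random

# Hypothetical drill-down questions, initially grouped by topic.
questions = [
    ("Sales", "How satisfied are you with quotation turnaround?"),
    ("Sales", "How satisfied are you with account-manager responsiveness?"),
    ("Operations", "How satisfied are you with on-time delivery?"),
    ("Support", "How satisfied are you with first-call resolution?"),
]

rng = random.Random(42)  # fixed seed only so this sketch is reproducible
shuffled = questions[:]  # copy, leaving the master list intact
rng.shuffle(shuffled)

# Same questions, mixed topic order.
for topic, text in shuffled:
    print(f"[{topic}] {text}")
```

In practice each respondent would get an independently shuffled order (no fixed seed), so no two respondents see the topics grouped the same way.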
Fourth, Results – Each respondent receives a synopsis of the survey results, either in writing or, preferably, in person. By offering at the outset to share the results of the survey with each respondent, interest is generated in the process, the response rate increases, and the company is left with a standing invitation to return to the customer later and close the communication loop. Not only does this offer a means of exploring and dealing with identified issues at a personal level, it also often increases an individual's willingness to participate in later surveys.
A well-structured customer satisfaction survey provides a wealth of invaluable market intelligence that human nature would never otherwise allow access to. Properly done, it can be a means of establishing performance benchmarks, measuring improvement over time, building individual customer relationships, identifying customers at risk of being lost, and improving overall customer satisfaction, loyalty and revenues. If a company is not careful, however, it can become a source of misguided direction, wrong decisions and wasted money.