Jan 29, 2015
Beyond Face Value
By Todd Brown, CCS
Strength and conditioning specialists are constantly presented with new studies that they use to shape their training philosophies. But when considering new information, it’s important to consider the source and the biases and mitigating factors that surround and contribute to any research project.
I attended a training seminar last month on core training. The presenter is a world-famous practitioner who has worked with hundreds of clients, sold thousands of books, and lectures worldwide.
He preached the importance of various training modalities that would improve not only clients’ physical health, but also their performance. Upon returning home, I immediately sat down at my computer and began searching for information on one of the most talked-about exercises I had just heard about for the last four hours. To my dismay, I found only two research studies on this specific movement, both of which contradicted what I had just spent hours (not to mention a few hundred dollars) being told.
The research studies were conducted at a very small university by a Ph.D. who was unknown to my colleagues and me. The results totally contradicted the lecturing expert. Then it struck me. During the seminar, there wasn’t any “hard scientific data” presented that entire afternoon.
All of the colorful and interesting graphs were composed of entirely anecdotal information. So whom should I believe? Should I put my faith in the unknown scientist or the expert practitioner? Surely, the presenter and “authority on the subject” was aware of the contradicting information. How could he not be, since he was a “respected expert” in the exercise field?
Did he discard the research because they were poorly constructed studies, or was it because it didn’t support his few hundred dollars per person seminar? Did the researcher conduct the study to find the “truth” or was she someone that interpreted data with personal bias due to her funding?
I didn’t know whom to believe or why. At that moment, I realized that maybe I was asking the wrong question. Maybe I shouldn’t think about whom to believe, but about why I should or shouldn’t believe. The following is meant to help anyone in my situation who at times feels caught between the “expert’s” anecdotal information and the scientist who presents “hard scientific data” that may not support others in the practicing community. A main ingredient in becoming a better performance trainer is understanding that we are all biased to some degree in every facet of our lives. Have you been to the bookstore lately? Did you go toward a specific section or book? Was there a reason why you didn’t walk into the romance section or the scientology section? Not interested in those things? Maybe it’s not interest; maybe it’s bias.
Dr. Andrew Newberg, MD, of the University of Pennsylvania lists 27 biases that he believes may skew our perception of the world. As the list is substantial, I have narrowed it down to a few recurring themes I observe in the fitness industry (narrowing the list to what I think is most relevant may itself show my bias!):
Authoritarian Bias: Believing people of power and status over others.

Confirmation Bias: Emphasizing information that supports our beliefs while unconsciously ignoring or rejecting information that contradicts them.

Self-Serving Bias: Maintaining beliefs that benefit only our own interests and goals.

Bandwagon Bias: The tendency to go along with the beliefs of whatever group we are involved in.

Perseverance Bias: Continuing to insist that a specific belief is true even when confronted with contradictory evidence.

Persuasion Bias: The more dramatic and emotional the presenter, the more we tend to believe.

Publication Bias: The tendency of journals to publish positive outcomes while excluding research with negative outcomes, along with our assumption that anything published must be true.

These biases come out virtually every day in the fitness world, from the gym to the field of play. Where do you see yourself? Are you a practitioner or a scientist? Science constantly attempts to solve very specific problems and never seems to be totally sure of the answer. This is exemplified by the repeated use of terms such as “may” and “possibly” throughout publications in scientific journals.
Conversely, practitioners are focused on improving health or performance and need to appear confident that what they are doing is absolutely correct. Practitioners may also hide what they are teaching from others in the field. This is how they make their money, so anything that may help them squeeze above the pack is kept quiet.
Science, on the other hand, is a field of shared, published knowledge. At times the performance training community is split by these polar opposites: science, which shares knowledge that at times appears uncertain, and practitioners, who cannot afford to appear in doubt of what they teach and so keep it under a veil of secrecy. Can these two factions see eye to eye? Is there a way to stand between both sides and still be successful and appear confident in your training methods? Absolutely. To find the answers, we must first look at both sides of the coin and examine how each can be biased.

Charts can be overwhelming. When a research paper illustrates certain points with bold graphs, the information can be easily distorted. Many times, the dramatics of visuals can skew the information. Always bear in mind that science-based research, although well meaning, has inherent limitations. It is unrealistic to think that all of the possible factors or variables can be controlled at once.
Most published research does not present an answer set in stone, but rather a possibility, attempting to shed light on a very specific topic. Proving or disproving a hypothesis does not always lead to absolute truth but to interpretations of the presented evidence. Research evidence may largely be a matter of interpretation, and those interpretations can be swayed by generalizations or even error. Always bear in mind that multiple biases, the result of a lifelong trail of experiences, could play a role in outcomes. In regard to research, it is imperative that we question the who, what, when, and where of the material being presented.

Statistics are often invaluable. The numbers help organize information in a way that lets us better understand a specific situation. This information can be presented using numerical values, data points, and comparisons in graphs and visual aids.
It has often been said that numbers don’t lie. Or do they? The numbers themselves may not, but the interpretation of the evidence may be limited by underlying beliefs or potential personal gain. This is not to say that research isn’t honest, but that at times it is possible for the information to be slanted to one side or another for reasons unknown to the audience.
The way the values are presented opens a whole new realm of possibilities, and various interpretations may arise from the same information. As an example, let’s examine a hypothetical training program that claims it will reduce your risk of injury by 50 percent. Superficially, that sounds promising and appears beneficial for clients, and some people will immediately institute the program. However, the 50 percent figure means nothing without context. Instead, look deeper.
For example, if ACL injury prevention program “P” says that in a study, it reduced the incidence of non-contact ACL injuries by 50 percent in female soccer players, it’s saying that females who are not involved in the program are twice as likely to tear an ACL in a non-contact situation. This can also be framed as a 100 percent increase in risk for non-participants.
Based on that preliminary information, if you are a performance trainer, coach, or player, program “P” may be something worth trying. On the other hand, a closer look at the study reveals that only two athletes out of 100,000 tore their ACL in this fashion. So program “P” does cut the risk in half, but the remaining risk is just one in 100,000, or one-thousandth of one percent. What if the study failed to report that 12 participants fell and broke an ankle during training? Does that change things? Do the benefits of the program still outweigh the risks? Confused? I am!
So let’s review. There are a whole bunch of numbers to choose from: a 50 percent decrease in risk, a 100 percent increase in risk, and a one-in-100,000 chance of incurring this type of injury. People who have put together program “P” will cite one number (or two) that suits their purpose, while others who happen to dislike the research group will cite another.
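For readers who like to check such claims themselves, the arithmetic behind the hypothetical program “P” example can be sketched in a few lines of Python. The injury counts and population size are the article’s illustrative numbers, not real data:

```python
# Hypothetical figures from the program "P" example:
# baseline risk 2 per 100,000; the program halves it to 1 per 100,000.
baseline_injuries = 2
program_injuries = 1
population = 100_000

baseline_risk = baseline_injuries / population   # 0.00002, i.e. 0.002%
program_risk = program_injuries / population     # 0.00001, i.e. 0.001%

# Relative risk reduction: the headline "50 percent" figure.
relative_reduction = (baseline_risk - program_risk) / baseline_risk

# Absolute risk reduction: the change in actual probability of injury.
absolute_reduction = baseline_risk - program_risk

# Number needed to treat: how many athletes must complete the
# program to prevent a single injury.
nnt = 1 / absolute_reduction

print(f"Relative risk reduction: {relative_reduction:.0%}")   # 50%
print(f"Absolute risk reduction: {absolute_reduction:.5%}")   # 0.00100%
print(f"Number needed to treat:  {nnt:,.0f}")                 # 100,000
```

The contrast mirrors the article’s point: the relative figure (50 percent) sounds dramatic, while the absolute reduction and the number needed to treat show just how rare the injury actually is.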
Again, this is just a hypothetical example illustrating how quickly numbers can be manipulated and interpreted to support specific parties. This is not to say that all research and statistics are tainted. Most scientists legitimately attempt quality research and present their data with good intentions in order to help the training community, but taking the time to delve deeper may pay off in the long run. Bear in mind that society as a whole tends to believe most of what it reads and hears, especially if the information is disseminated by a voice of authority. As discussed earlier, if we only select certain articles to read, lectures to attend, or even channels to watch on television, we are already beginning to give way to bias.
In order to evolve as a performance trainer, it takes courage and time to dig deeper and not take everything at face value. Although this attitude may at times be mistaken for distrust, in actuality it is about finding the truth and expanding the sphere of knowledge.
As a sports medicine professional, it is wise to approach new studies and reports with caution, as the snippets that are reported are at times interpretations cut specifically to fit a certain-sized space or time slot, potentially leaving out essential information. The media at times reports preliminary results of various studies in order to be first to release information. These may contradict the actual results once the investigation has been completed, and only rarely does the same outlet revisit the report with the contradictory results, as it may not be of interest to most. Another very popular way of learning about a specific topic is the Internet, but as convenient and helpful as it is, there is a downside. As a medical doctor friend told me, “everything in regard to health on the Internet inevitably leads to cancer or death.”
Most websites promote information that may help sales of that specific sponsor. To form a well-rounded, holistic viewpoint of any topic, it is important to investigate the opposite side as well. Although it is much less time consuming to have someone else summarize the information, generally it is more beneficial to investigate the information ourselves.
At times, the oversimplification of information is a good thing, as we can only process so much information at one time, filtering and ignoring the rest, but the moment that information is processed, bias occurs. Was the information more trustworthy because of the “expert” who presented it? Did it reaffirm some of the person’s beliefs? Did it reaffirm what everyone else is doing at the gym? It is much easier to walk away with a sense of reaffirmation than of contradiction.
How often have you ignored a website or walked away from a discussion because it was the opposite of what you thought to be true? Not often? If so, you are one of the few; hence Republican vs. Democrat, Christian vs. Muslim, or, in exercise science terms, single-set vs. multi-set training (among hundreds of other topics). The Internet, much like other sources, can be very helpful if you know where to look and have the open-mindedness to investigate the other side. Not only are periodical and video media susceptible to jumping the gun, but books published by “experts” may not always be 100 percent reliable. Health sciences advance at such a tremendous rate that there is always new information that may conflict with what was presented or published a month ago. Keep in mind that the dollar is a large part of the motivation for some people, even “experts.”
It is relatively easy to support a certain belief by citing only the studies or specific scientific investigations that agree with the book being written. As the dollar enters the picture, various biases toward the publication often surface. Why? The more popular the book, the higher the sales and earning potential for everyone involved. It is a very difficult thing to tell both sides of the story, especially if there is conflicting evidence that may water down a belief or point held strongly by an author.

With all of these mitigating factors, how should we approach new information? Do not get caught up in the previously cited biases. Take the valuable commodity of time and put forth the effort to look deeper into things with probing questions. Keep in mind that science is constantly trying to prove itself without being misunderstood. Conversely, practitioners often feel powerless when research statistics are thrown in their faces and they have nothing but anecdotal information with which to defend their training methodologies. Both sides offer tremendous opportunities for learning and growing as a trainer, and they can exist together when they support each other with the understanding that the who, what, where, and why questions are aimed at improving the training world, not tearing it down.
Todd Brown, CCS, is co-founder of the Essential Element, a facility for training athletes in Northern Virginia. He is an independent sports science consultant who has worked with MLB, the NFL, and the WPS. He can be reached at: [email protected]
FEEDBACK As a practitioner who has more than 200 independent research articles that support everything I do in the field when training adolescent female student-athletes (more than 600 females in past 14+ years), I recognize that not very many trainers will take the time to verify what is being “sold” to them; especially when it comes from a well known author.
Todd Brown has discussed this area very well and I hope more trainers, etc. take heed to what he is discussing. Just because something works for a 22 year old female does not mean the same exercise should be used for a 15 year old who still has open growth plates.
Plyometrics for young females should not be used, or should be used judiciously, since adolescent females suffer from lower-body injuries at a much higher rate than their male peers. Among other factors, the lack of the male neuromuscular spurt at adolescence is just one reason that, until a young female demonstrates balance, neuromuscular control, and proprioception for each joint, land-based plyometrics should perhaps be avoided. I hope more practitioners will listen to what Todd is discussing and not just rush into the latest fad in training youngsters.
SMART training with less weight and more reps provides the lean muscle most females are seeking. – Warren Potash, Fitness Therapist and Athletics Operations Assistant California Lutheran University Athletics – Equipment Room