Recently I read the book “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations” by Nicole Forsgren, Jez Humble and Gene Kim. It is a very popular book that offers great advice for organisations that want to become more productive at delivering software. With Jez Humble involved, you might expect the Accelerate book to focus primarily on DevOps. As a foundation for its advice, the book uses the DORA (DevOps Research and Assessment) reports, which are based on a yearly survey. The book also explains how the authors conduct the research and analyse the results.
I don’t think there is any doubt in the community about the advice that the book gives. However, many see the research itself as a controversial foundation, mainly because the source of data for the research is a survey. The authors of the book argue, however, that the statistical methods, variables and correlations were carefully chosen, and that the whole study was peer-reviewed as part of the publication process.
Here I come to a situation I noticed not long ago. For some reason, the words “peer-reviewed research” have become equivalent to “the absolute truth”. People tend to forget the many well-known examples where new research partially or entirely invalidated, or at least challenged, the outcomes of previous studies.
The most obvious reason for this phenomenon, as I see it, is that the IT community has no proper connection to any science whatsoever. It is enough to look at the body of research in computer science that built the foundation for the entire industry. Most of that foundation was laid down between the 1950s and the 1970s, and not much has changed since. At the same time, we can clearly observe that developers keep making the same mistakes by ignoring all the prior research, going in circles and discovering “new” things that are in fact quite old and well-known.
Since the community is not involved in research, we tend to see science as something alien and removed from our daily work. When someone writes a paper and manages to publish it in a scientific journal after peer review, we treat its conclusions as if they were written in stone. At the same time, we enjoy reading mass-media articles reporting obscure findings in fields of science unrelated to computing, without the slightest clue what practical meaning that science might have.
This brings me to surveys and social science. I might be wrong here, but many of us see psychology, sociology, political science and the other social sciences as something vague and often biased. Take political science as an example: politicians often hire experts in the discipline to help win their campaigns, yet despite all the effort from those experts, we can hardly determine whether they have any effect on the campaign at all. Sociology seems to be much less publicised and less known to the public. It also struggles to formulate specific findings, because most of its research is based on surveys and doesn’t run long enough to establish the long-lasting effects of particular events or drivers on society. It is also very hard to perform clean experiments with separated, isolated groups of subjects, simply because those subjects are people, and people are usually not prepared to spend years in isolated groups being observed by researchers.
Now I want to come back to Mr. Humble. He is a passionate person with a strong moral compass, I must admit. But being acquainted with his style of persuading people towards his beliefs, I have serious doubts about his qualities as a researcher.
I had a dispute with Jez on Twitter last year. It was about women being under-represented in STEM, and in IT in particular. He claimed that Europe has an even bigger problem than the US in this area. In his opinion, the confirmation that this is a problem is the fact that the genders are still far from equally represented in the field, even though many European countries score very high on the Human Development Index.
If you click on the HDI link above, you’ll find that Norway scores highest in the index, yet we still observe male dominance in IT there. My point in the discussion was that there might be reasons other than simple discrimination that lead to such a situation.
This is a weird approach. Norway is number one for human potential development. I think you’re firing the wrong target here.— Alexey Zimarev (@Zimareff) October 3, 2017
I personally find the “50% of the population” statement fundamentally flawed. There is a plethora of factors on multiple levels, from the individual through the family to society as a whole, that influence how people choose their careers. The HDI gives a rough indication of how free people in a given country are to choose what they want in life. Even when the choice is there, the society might still be biased, there is no doubt. Jez Humble mentioned research that explored exactly such a phenomenon:
Actually you are wrong, this is a known phenomenon with societal causes: https://t.co/9P1bJdZCCX— Jez Humble (@jezhumble) October 3, 2017
The research was conducted by Gijsbert Stoet, Drew H. Bailey, Alex M. Moore and David C. Geary and published in 2016. It is a very interesting read, and I recommend it to everyone who wants to understand how society can influence the decisions of young individuals and impose gender roles.
My point, however, was different.
OK you are literally just shouting slogans while I am giving you peer-reviewed science. Try "choosing" to actually read the evidence.— Jez Humble (@jezhumble) October 3, 2017
In response, Jez shut the discussion down, sarcastically recommending that I “choose” to read the article (which I had already done). Hence “peer-reviewed science” sits at the centre of that sentence.
And here comes the trouble with “peer-reviewed science” being seen by academic amateurs as the final truth.
In 2018, the same group of scientists published another study, called The Gender-Equality Paradox in Science, Technology, Engineering, and Mathematics Education.
In the paper, a pair of psychologists — Stoet and David Geary of the University of Missouri — found that across most countries, girls are as good as boys, and often better, at math and science. But in countries with greater gender equality like Norway and Finland, women make up less than 25% of college graduates in STEM fields. In and of itself, this gender gap isn’t news. But the researchers theorized that because these countries tend to be richer, women have the financial freedom to pursue their natural interests — which drives them more toward the humanities.
The new research came closer to our discussion, and the funny part is that the same group of scientists Jez Humble had referred to in our short conversation was now discussing the very point I had been trying to convey.
The findings will likely seem controversial, because the idea that men and women have different inherent abilities is used by some to argue that we should forget trying to recruit more women to the STEM fields. But, as Janet Shibley Hyde, a gender-studies professor at the University of Wisconsin who wasn’t involved with the study, put it to me, that’s not quite what’s happening here.
“Some would say that the gender STEM gap occurs not because girls can’t do science, but because they have other alternatives, based on their strengths in verbal skills,” she said. “In wealthy nations, they believe that they have the freedom to pursue those alternatives and not worry so much that they pay less.”
The More Gender Equality, the Fewer Women in STEM - The Atlantic by Olga Khazan
I find it quite amusing that it was almost the same idea that I suggested to Mr. Humble, which he immediately dismissed by referring to earlier research by… the same group of scientists. I have to admit that it’s quite ironic.
However, that’s not the only point here. Remember that this research was peer-reviewed as well; otherwise, it could not have been published. Yet another group of scientists, from Harvard’s GenderSci Lab, questioned the methods and conclusions of the study.
Sarah Richardson, a science historian at Harvard University, told BuzzFeed News that the study authors used a “very selective set of data” to produce a “contrived and distorted picture of the global distribution of women in STEM achievement.”
The GenderSci Lab researchers challenged the Stoet group’s findings in public and even tried to have the publication taken down for not following data-processing standards. In response, Stoet’s group published a reply, which states, among other things, the following:
We hypothesize that men are more likely than women to enter STEM careers because of endogenous interests (Su, Rounds, & Armstrong, 2009). Societal conditions can change the degree to which exogenous interests influence STEM careers (e.g., the possibilities of STEM careers to satisfy socioeconomic needs). But when there is an equal playing field and studying STEM is just as useful (balancing income and career satisfaction) as a degree in other areas, people are better able to pursue their interests and not simply their future economic needs.
Apparently, the magical peer review neither makes research flawless nor turns it into something that cannot be challenged, let alone guarantees that its conclusions are correct.
I am not trying to argue about any of these studies or their conclusions. As you can see, any research can be challenged and questioned, and there is no magic in peer reviews. As Sarah Richardson also said:
Cultural patterns around women’s achievement in and preferences for STEM are incredibly complex and incredibly diverse across the globe.
My concern is how people who move from industry to academia start to assume that everything published after peer review automagically becomes undoubted truth. That belief gives them license to shut down any discussion and any opinion that contradicts their own point of view, after pleasing their confirmation bias with a selective set of research papers. I would therefore question their own research, because the inability to conduct a meaningful conversation is, to me, a sign of rigidity that is completely unacceptable in people involved in science.
In the case of that particular episode with Mr. Humble, I even attempted to get in touch with him and sent links to the new papers published by Stoet’s group (which he appeared to trust). Jez replied that he had no time to waste reading anything else on a topic about which he had already formed an opinion. For me, it is one more argument for the theory about the prevalence of the bandwagon effect in the IT industry in general, where a group of influential individuals set the agenda for the whole community and make their own views mainstream. Doubts and discussions about their controversial views then get obstructed and declared marginal, effectively stopping any debate.
- Countries with Higher Levels of Gender Equality Show Larger National Sex Differences in Mathematics Anxiety and Relatively Lower Parental Mathematics Valuation for Girls
- The Gender-Equality Paradox in Science, Technology, Engineering, and Mathematics Education
- The More Gender Equality, the Fewer Women in STEM
- STEM Gender Equality Paradox Study Gets Correction
- If not a paradox, then what? 7 alternative explanations for the inverse correlation between the Global Gender Gap Index and women’s tertiary degrees in STEM
- The Gender-Equality Paradox Is Part of a Bigger Phenomenon: Reply to Richardson and Colleagues (2020)
- Human Development Index - Wikipedia