Lessons from the Vietnam War on the Limits of Being Data-Driven
When fighting a war, one important thing to know is whether you are winning or not!
Before the Vietnam War, the US military had fought in three major conflicts during the 20th century: WW1, WW2 and the Korean War. All of these were conventional wars fought over territory, so whether you were winning or losing was dictated by the expanse of land you controlled and whether you were gaining ground or ceding it. When Hitler's forces marched into Paris, the Nazis were winning the war; when the Allies reached Berlin, the Nazis had lost. Pretty simple!
However, the Vietnam War presented a different predicament. The US generally had control (or could take control) of the urban regions, while the Viet Cong (VC) mostly controlled the surrounding jungle. So instead of a classical war map with clear, distinct lines of control, this looked more like a Dalmatian's coat, with the US controlling the spots and the Viet Cong dominating the white in between. A different way of knowing how the war was progressing had to be devised.
Enter Robert McNamara. Robert McNamara was the US Secretary of Defense from 1961 to 1968, under both John F. Kennedy and Lyndon B. Johnson. Prior to this, he had been President of Ford Motor Company, where he was instrumental in encouraging the use of spreadsheets, graphs and data in general. As Secretary of Defense, he applied this data-driven mindset to the Vietnam War. This led to the compilation of large amounts of data on every detail imaginable, so that decisions could be made in a more scientific and rational way. Plans were even being made to withdraw in 1964, as the statistical models said the war would be won by then. But the main metric McNamara cared about, the one that for him overrode all others, was "body count": the number of enemy soldiers killed. In his view, the side inflicting the greater losses was obviously winning. This was to become the ultimate way of knowing how the war was going and what the winning chances were.
In the absence of any better way of knowing who was winning, this seemed like a pretty good metric. After all, if you looked at previous wars, it was obvious that periods when you were inflicting heavy losses on your enemy correlated highly with gaining land and getting closer to victory.
So, how did this all turn out in the Vietnam war?
Well, according to the data, it went brilliantly. The official US Department of Defense figure was 950,765 communist forces killed in Vietnam from 1965 to 1974, while US and South Vietnamese losses were estimated at between 250,000 and 300,000. Clearly, this was a resounding victory for the United States. The spread of communism in South-East Asia was stopped dead in its tracks, global respect for the US grew, and the citizens at home were united in their pride at such an achievement.
Unfortunately, reality didn't quite match how well the data said the war was going. On 29 April 1975, in the largest helicopter evacuation in history, the US fled Saigon. One day later the city fell to North Vietnamese forces, South Vietnam surrendered, and the war ended in a humiliating defeat for the US. That same year, communists took power in both Laos and Cambodia, and the US public was more divided than it had been since the Civil War over 100 years earlier.
So, what went wrong? How was the data at such odds with reality?
Well, the primary problem was that the goal of US involvement in Vietnam wasn't to kill as many Vietnamese as possible. The aim was to prevent the spread of communism in Southeast Asia. Having the army optimise for 'body count' was frequently at odds with this goal. An example is the Battle of Hamburger Hill, where 72 US soldiers died taking a hill "that had no military value whatsoever", only for it to be abandoned almost immediately. The justification, according to Major General John M. Wright, was: "We found the enemy on Hill 937 and that's where we fought him."
This fixation with body count also led to widespread misreporting of civilian deaths as VC casualties. The rule of thumb became "If it's dead and Vietnamese, it's VC". It's estimated that approximately 220,000 civilian deaths were recorded as VC dead. The most shocking example is the My Lai Massacre, in which US soldiers murdered approximately 500 unarmed civilians. It was officially reported as a success, with 128 VC combatants killed, though in reality there had been no such casualties. General Westmoreland even praised the participating soldiers for their 'outstanding action', telling them they had 'dealt the enemy a heavy blow'.
In our modern world where we have data on just about everything, what are the lessons we can learn from the mistakes of the Vietnam war?
Correlation does not imply causation
In previous wars, a high enemy body count correlated strongly with strategic gains in the larger war effort. However, this doesn't mean that a high body count in and of itself caused these gains. From a business point of view: if you see that a certain metric correlates with some positive outcome, this doesn't mean that increasing the metric will cause more of that positive outcome to happen.
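The trap is easy to reproduce numerically. In this minimal sketch (all variables and numbers are invented for illustration), a hidden common cause, here labelled "overall military strength", drives both the metric (enemy losses) and the outcome (ground gained). The two end up strongly correlated even though neither causes the other, so pushing the metric directly would do nothing for the outcome:

```python
import random

random.seed(42)

# Hypothetical confounder: overall military strength drives BOTH
# the metric and the outcome. Neither causes the other.
n = 1000
strength = [random.gauss(0, 1) for _ in range(n)]
enemy_losses = [s + random.gauss(0, 0.5) for s in strength]
ground_gained = [s + random.gauss(0, 0.5) for s in strength]

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong correlation (around 0.8 here), despite zero causation:
print(round(corr(enemy_losses, ground_gained), 2))
```

Any intervention that raised `enemy_losses` without touching `strength` would leave `ground_gained` exactly where it was, which is the whole point.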
Optimise reality and let the metrics follow
A common problem with using data to better understand reality is that the data can quite easily become what we interpret as reality. We start framing problems in terms of metrics that need improving rather than customer needs that need fulfilling (or strategic military objectives that need to be achieved). In the Vietnam War, the 'body count' metric became the goal in and of itself. What should have happened was that decisions were taken with a view to defeating communism, with 'body count' treated as a useful but imperfect indicator of how things were going. This dynamic is best captured in what's known as Goodhart's law: "When a measure becomes a target, it ceases to be a good measure."
All models are wrong but some are useful
Always remember that any models you build or data-driven predictions you make are never going to be a perfect reflection of reality, and should be treated with a healthy dose of skepticism. There are always going to be weird and wonderful ways in which the data could be telling us completely the wrong story. After all, Robert McNamara proudly proclaimed in 1962 that "every quantitative measurement … shows we are winning the war".