We are moving toward a society controlled by algorithms, but very few of us actually understand how they work. This asymmetry of information is a recipe for disaster. Case in point: Recently in the U.K., an algorithmic failure put the lives of 450,000 women at risk when a technical error caused them to miss invitations to breast cancer screenings.
Unfortunately, this is not an anomaly, and if the tech industry doesn’t take the lead on imposing oversight on its own algorithms, the government may create regulations of its own, throwing up roadblocks to innovation.
We have seen time and time again the mistake of placing our blind trust in algorithms. Even our best intentions can go awry when we’re working with something we don’t always understand, which has the ability to scale globally almost instantly.
This isn’t a new concept. Since the early 1900s, “scientifically proven” was the trendy phrase in innovation, and it bled into marketing: only a few people with highly specialized knowledge, in this case scientists, understood the esoteric research and the biological sciences behind it. Most people believed that research blindly, and it was exploited to sell products. By the early 1990s, “data driven” had beaten out “scientifically proven” as the de rigueur buzz phrase. Anything data driven must be correct because the data said so; therefore, trust us and buy the product.
Now those terms have been superseded by “AI” and “machine learning”: still knowledge understood by only a few, still being used to sell products.
For years, these terms and approaches have been guiding myriad choices in our lives, yet the vast majority of us have just had to accept these decisions at face value because we don’t understand the science behind them.
In an age in which many aspects of technology could still be considered the “Wild West,” and tech gurus “outlaws,” I contend this is a problem we should get in front of rather than react to. It is imperative that companies voluntarily subscribe to algorithmic audits: unbiased, third-party verifications of their code. Much like a B Corp certification, these external audits would demonstrate that a company is doing the right thing and would help it course-correct any biases.
If we don’t take a firm lead on this type of verification process, the government may eventually step in and impose overly cumbersome regulations. Government oversight at that scale would be nearly impossible to execute well and would eventually impede progress on any number of initiatives.
Technology adapts faster than even the technology industry can handle, and so adding a layer of governmental bureaucracy would further throttle innovation. Data science is like every other science, requiring experimentation and beta testing to arrive at more effective technologies; regulation would stifle this process.
We’ve seen similar oversight before: insurance companies, for example, must have their actuarial models certified by the state before they can put their data to work. There is also a growing movement in cities and at companies to address bias in algorithms. Recently, New York City assembled an algorithm task force to look at whether its automated decision systems are racially biased. According to a StateScoop article, “The City uses algorithms for numerous functions, including predicting where crimes will occur, scheduling building inspections, and placing students in public schools. But algorithmic decision-making has been deeply scrutinized in recent years as it’s become more commonplace in local government, especially with respect to policing.”
The tech industry funding a research council, with the goal of creating best practices to elevate the quality of algorithms, is far better than the alternative. According to Fast Company, algorithms now even have their own certification, “a seal of approval that designates them as accurate, unbiased, and fair.” The seal was developed by Cathy O’Neil, a statistician and author who launched her own company to ensure algorithms aren’t unintentionally harming people.
In an effort to practice what I preach, we did exactly this at my firm, Rentlogic, a company that gives apartment buildings grades based on a combination of public data and physical building inspections. Because our ratings are based on an algorithm that uses public data, we wanted to ensure it was unbiased. We hired Cathy O’Neil, the aforementioned author of Weapons of Math Destruction, who spent five months going through our code to verify that it faithfully does what we say it does. This is paramount for building trust with the public and private sectors as well as our investors; people now care more than ever about investing in companies that create a positive impact.
With more and more stakeholders turning their attention to algorithms, I hope we will see more firms independently doing the same. For the tech industry to maintain its integrity and the public’s faith in algorithms, we must take it upon ourselves to seek third-party audits voluntarily. The alternative will be disastrous.