First, some context: my direct experience over the last decade-plus has been mostly B2B, selling enterprise-grade software to large corporates, so my perspective is skewed that way. I have had many conversations with folks working in the small-business and consumer spaces and know those can be very different worlds.
When you sell software in a risk-averse corporate context, checklists always come into play. Many, many checklists. Feature checklists, of course, but also many flavors of contractual, security, and even internal role/responsibility-based checklists. Most of these checklists have to be approved before a sale can be completed, much less a corporate integration.
These checklists presume the business, much less the software, will work in specific ways. Ways that try to make it as easy as possible to line up and compare vendors within a space. Ways that a machine-learning-driven approach comes into direct conflict with. Ways that make it very hard to participate in the market as a startup. When you try to do anything differently, it is inevitable that you come into direct conflict with these checklists.
Now, it is possible to bypass many of these checklists if the customer is sufficiently motivated. This is how we got our first big customers at Safe Banking Systems: they were often under heavy pressure from a government, sometimes even a cease-and-desist. The products then on the market couldn't do anything to save them on the timelines they needed. With our AI we could come in and do in a couple of weeks what would take other vendors half a year or more.
Repeated successes led the market to start bending to us. After a few years our reputation allowed us to skip some parts of these checklists. Later we even saw the checklists themselves starting to conform to us, but this took a solid decade and a lot of outreach work.
For example, I personally spent over half a year of my life working on a rather large document for practitioners explaining how our AI worked, why it worked the way it did, the process for using it, how to interpret its results, and why certain processes that were previously necessary had become unnecessary, and vice versa. Our model choices were heavily constrained by how explainable they were to compliance-department bankers.
Sometimes the internal structures of our customers' businesses were also constraining. After we launched our end-to-end platform, we found one area of friction was that our software required only a tiny fraction of the operators previously needed to review results. On the surface this sounds like a big win, right? It's so much cheaper. However, some decision makers measured their power in terms of the number of employees under them, so our software became a threat. And of course, you can always justify reviewing more records, even those with a very high probability of being false positives; there could be something in there.
Of course, this is all on top of the typical technical challenges of integrating with large corporates on a data-focused project: data being combined from many upstream systems by an unrelated department; different departments with conflicting requirements; impossible product change requests; surprisingly onerous clauses slipped into contracts, such as "no downtime". I could spend years writing down the stories.
In the end, I believe the key was that our domain was a "small world": the higher-level executives all talked to each other, and, as we won them over, to us. They were all very risk-averse; it was their job to control risk. But it eventually became clear that we were the ones to call if you needed to make this very hard problem easy.