by Mark Gritter | at MinneBar 10 | 9:15 – 10:05 in Discovery
Certainly any algorithm that's been peer-reviewed multiple times must not have any obvious errors, right?
What about algorithms by leaders in the field, which come with proofs of correctness, and form the basis for tons of later research? Nothing of that stature could be flawed, could it?
But those are purely academic concerns, with no practical impact. Surely something as basic as a sorting algorithm, implemented many times over and thoroughly tested, couldn't fail to operate correctly?
In fact, I'll show examples of all three of these. Let's have a conversation about the ways in which algorithms fail, and the ways to increase confidence that your algorithms and designs are correct.
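As a taste of the kind of failure the session is about (this is an illustrative sketch, not necessarily one of the examples from the talk), consider the classic midpoint-overflow bug that survived for years in widely used binary search and merge sort implementations despite review and testing:

```java
// Illustrative sketch: the midpoint-overflow bug.
// The names midBuggy/midSafe are hypothetical, chosen for this example.
public class MidpointOverflow {
    // Buggy: (low + high) can exceed Integer.MAX_VALUE for large indices,
    // wrapping around to a negative midpoint and crashing the search or sort.
    static int midBuggy(int low, int high) {
        return (low + high) / 2;
    }

    // Safe: the difference (high - low) never overflows when 0 <= low <= high.
    static int midSafe(int low, int high) {
        return low + (high - low) / 2;
    }

    public static void main(String[] args) {
        int low = 1_500_000_000, high = 1_600_000_000;
        System.out.println("buggy mid = " + midBuggy(low, high)); // negative value
        System.out.println("safe  mid = " + midSafe(low, high));  // 1550000000
    }
}
```

The bug only bites on arrays with more than about a billion elements, which is exactly why testing never caught it: correctness on every input you tried is not the same as correctness.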
Mark Gritter is a Founding Engineer at Akita Software, his fourth startup experience, building API observability. Mark formerly worked at HashiCorp on the Vault team; co-founded Tintri, an enterprise storage company that IPOed in 2017; and was a day-one employee at Kealia, a video streaming startup acquired by Sun Microsystems in 2004.
Mark's previous MinneBar presentations have covered topics such as correctness of algorithms, combinatorial auctions, scaling a startup, building a file system, and procedural content generation.