Unknown Knowns
“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.”
Defense Secretary Donald Rumsfeld, February 12, 2002.
In addition to Rumsfeld’s three variations, there’s a fourth category, unknown knowns, that isn’t talked about as much but is relevant to AI, particularly for risk managers.
We can broadly define unknown knowns as ‘things that are known, but we don’t or can’t access that knowledge’. We’ve all felt this way when we know we’ve read a report or seen a critical piece of data somewhere; we just can’t recall where. It’s the inaccessibility that causes the issue.
When it comes to AI, unknown knowns are both an opportunity and a limitation, particularly for security and risk managers.
First, the opportunity. LLMs are very good at taking vague questions and trawling vast amounts of data to find plausible answers. That’s very different from traditional Boolean search, where the machine only matches the exact terms and parameters you give it. More broadly, you can ask an LLM to examine a large body of data and look for trends or patterns, which means we can set it loose on the enormous data hoards we accumulate. Those hoards are far too massive to interpret manually, but with an AI we can extract the hidden knowledge locked inside them. In other words, AIs open up access to these otherwise unknown knowns.
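To make the contrast concrete, here’s a minimal sketch. It uses a toy bag-of-words similarity as a stand-in for the embedding-based retrieval that sits underneath LLM-style search; the documents, function names, and scoring are illustrative assumptions, not any particular product’s API.

```python
# Contrast exact-match ("Boolean") retrieval with similarity-based retrieval
# of the kind that underpins LLM/RAG search. The toy "embedding" below is a
# bag-of-words vector; a real system would use a learned embedding model.
import math
from collections import Counter

documents = [
    "Q3 incident report: phishing attempt against the finance team",
    "Vendor risk review notes for the new payments processor",
    "Minutes: leadership agreed to begin operations in Germany",
]

def boolean_search(query_terms, docs):
    """Return docs containing every query term exactly; miss a term, miss the doc."""
    return [d for d in docs if all(t.lower() in d.lower() for t in query_terms)]

def embed(text):
    """Toy 'embedding': a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def similarity_search(query, docs, top_k=1):
    """Rank docs by similarity to the query instead of requiring exact matches."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]

print(boolean_search(["expansion", "Europe"], documents))        # [] -- no exact hit
print(similarity_search("notes about the new payments vendor", documents))
```

The point isn’t the scoring maths; it’s that the second approach can surface a relevant document from a vague question, which is exactly how previously inaccessible knowledge becomes reachable.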
Unfortunately, this only works when the data is in a format the AI can access, and that’s where the limitation appears: a considerable amount of human knowledge and understanding is never written down or recorded, particularly in professional settings.
Imagine the amount of data passed around in an executive meeting. Hours of speech, images, graphs, whiteboard illustrations, scribbled notes, gestures, and facial expressions get summarized in a couple of lines of meeting minutes. For example:
‘The leadership team discussed the need to expand into new markets in Europe and agreed to begin operations in Germany.’
This gives us zero insight into how the team came to that decision. Domains where we do have meticulously recorded notes (such as medicine or scientific experimentation, where note-keeping is an essential part of the process) are the exception, not the rule.
So when it comes to training an AI on more abstract processes like decision-making, the knowledge isn’t accessible to the AI: it’s known to us, but unknown to it. Unless we start recording far more of our strategic discussions (which I think is highly unlikely in most organizations), that knowledge will remain inaccessible.
The result is either AIs with huge blind spots because there is little or no knowledge on which they can train, or, more likely, AIs that are very narrow and heavily biased towards one thought pattern or personality. That’s why it’s much easier to build a strategic AI that mimics Josh Waitzkin or Claire Hughes Johnson than it is to train a generic Deloitte AI.
How does that affect risk managers specifically? First, we have this tremendous opportunity to put the vast amounts of untapped knowledge to work. This will give us meaningful, actionable insights to help our businesses succeed.
But it also means that if we’re hoping for the risk management version of Harvey.ai, the legal co-pilot, we might be waiting a long time.
That’s because, unlike the law, which has a much more precise body of references, the security and risk management domain is fragmented into multiple fiefdoms that operate independently. Even within each one, there’s no single source of truth.
Moreover, much of what risk managers do, particularly operational risk managers, is based on experience and situational analysis: exactly the kind of knowledge that’s not recorded in detail.
So there’s no straightforward library or body of references we can point an AI at for training.
That might seem like a great advantage: if we can’t train security.ai, then it can’t take over our jobs. But that’s a very shortsighted perspective, because someone will train a security.ai anyway; it just won’t be very good, so your job will be taken by a second-class robot.
Plus, if we don’t have security co-pilots, we are surely missing out on all the advantages they offer.
So we should take this opportunity to unearth these unknown knowns and start training security and risk co-pilots. How?
Record your thought processes.
Document your procedures.
Codify the steps you take to work through complicated issues.
That way, you’ll be able to train your own AI or contribute to the SME version for your area of expertise.
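As a rough illustration of what that recording could look like in practice, here’s a minimal sketch of a structured decision record. The field names and example values are hypothetical, not a standard, but something in this shape turns tacit reasoning into a corpus an AI can actually be trained or grounded on.

```python
# A hypothetical structured decision record: capture the question, the options,
# the evidence, and the reasoning, not just the final outcome. Fields are
# illustrative assumptions only.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DecisionRecord:
    decision_id: str
    question: str                   # the issue being worked through
    options_considered: list[str]   # alternatives that were on the table
    evidence: list[str]             # data, reports, or observations relied on
    reasoning: str                  # why the chosen option won
    decision: str                   # what was actually decided
    residual_risks: list[str] = field(default_factory=list)

record = DecisionRecord(
    decision_id="2024-017",
    question="Do we allow the vendor remote access to the production network?",
    options_considered=["deny", "allow with monitoring", "allow unrestricted"],
    evidence=["vendor SOC 2 report", "past incident history", "contract SLAs"],
    reasoning="Monitoring plus time-boxed access keeps exposure acceptable "
              "while still meeting the support SLA.",
    decision="allow with monitoring, access expires after 90 days",
    residual_risks=["credential misuse inside the 90-day window"],
)

# Serialized records like this accumulate into training or grounding material.
print(json.dumps(asdict(record), indent=2))
```

The format matters far less than the habit: a few structured lines per decision is enough to stop the reasoning evaporating the moment the meeting ends.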
Otherwise, we’ll end up with a security and risk AI built by non-SMEs, one that sounds plausible but has huge blind spots because of the unknown unknowns.