Then there’s another point that is often overlooked: the importance of talking about both regulation and governance. I believe a lot of the necessary governance is actually embedded in the technology itself and the way it is set up, but we also need policy innovation for these new technologies.
To embrace cutting-edge technology, regulators need to support experimentation, sandboxing, and testing new things. The trouble is that they often lack the bandwidth and resources for the kind of agile, iterative approach we actually need. It’s also unlikely that a single regulatory framework will emerge.
When it comes to what “good” looks like, it’s going to be a collective learning journey, and as an LP, you have real insight into what your GPs are doing, what works, and what doesn’t on this highly complex topic.
Suzanne: Interestingly, we’ve seen that it’s very often the founders who recognize the power of the technology they’re creating who struggle most with their legacy. So turning to their trusted GP for guidance makes absolute sense.
Getting governance right at this level of complexity starts with establishing appropriate policies that define the issues to be considered, and with setting up steering committees led by people who know the right questions to ask, even if they don’t yet have all the answers.
Once that governance is in place, there’s a natural progression through to implementation. In our recent whitepaper, we cite some of the work of the National Institute of Standards and Technology (NIST) in the US, which takes a very considered approach to risk management, and we’ve looked at adapting that to the GP and LP level. What NIST describes is implementation across development, evaluation, testing, auditing, and feedback loops. None of this is surprising, but doing it well in this AI context, and doing it well for specific applications, is where we’re seeing the real learning curve emerge.