Reflections from a GTI webinar with Matthew Hunt and Jonathan Armstrong
Exploring accountability, data quality and the evolving role of human oversight
There’s little doubt that the emergence of AI has had (and will continue to have) an impact on due diligence. Businesses are incorporating AI-enabled tools into their processes, transforming the way that compliance teams work. But these tools are not without challenges, and using them responsibly means understanding the implications of how AI is being used.
This was the topic of a recent conversation between Matthew Hunt, COO and Co-founder of GTI, and Jonathan Armstrong, Partner at law firm Punter Southall Law. They explored AI’s role in transforming due diligence and investigations, the importance of data quality, and the challenges AI poses for legal and compliance frameworks.
Here is a summary of what they spoke about:
The quality of data
At the heart of the discussion was a simple principle: the reliability of any AI system depends on the quality of its data. Free or open-source models, often trained on large undifferentiated datasets, may draw from incomplete, biased or manipulated information. By contrast, pre-curated or specialist datasets provide a stronger foundation for compliance-grade research and analysis.
Publicly available content, particularly in markets with limited press freedom, can reflect the interests of those powerful and wealthy enough to protect their reputations, rather than reality. This makes data provenance a central concern for anyone relying on automated screening or investigative tools.
Fit for purpose
A growing divide is emerging between fit-for-purpose AI systems and those built on unreliable or poorly structured data. Many models rely heavily on generic sources, which are not always suitable for professional or regulated use.
In some sectors – notably law, healthcare and financial services – the cost of obtaining and maintaining high-quality data is likely to rise as ownership becomes concentrated among a few major providers. Other industries may move towards more open ecosystems. Either way, “good” AI is unlikely to remain free or subsidised forever, and responsible organisations will need to weigh both data quality and commercial sustainability when choosing technology partners.
Changing investigations and rising expectations
AI is already reshaping investigative practice. Automated systems can rapidly correlate information from diverse sources, uncover hidden connections and identify anomalies that might previously have gone unnoticed. These capabilities allow teams to disprove false claims, detect risk patterns and make faster, better-informed decisions.
However, as such tools become more accessible, regulators are raising the bar. The existence of affordable and capable AI solutions means that organisations are increasingly expected to use them. In practical terms, the legal duty of care in due diligence is expanding: failing to explore available tools could itself be seen as a compliance weakness.
Balancing automation with expertise
Despite these advances, automation cannot yet fully replace human expertise or judgement. AI is best used as a triage mechanism – a way to process large datasets, surface potential risks and prioritise cases for review before specialists apply critical judgement to interpret the findings.
As Armstrong said: “AI will get you part of the answer. It might get you to a provisional answer more quickly, more efficiently, more cheaply, but sometimes you’ll still need that expert assurance.”
Human oversight remains essential, particularly in high-risk jurisdictions or politically complex environments where context and local knowledge shape interpretation.
Accountability and defensibility
The discussion made clear that technology cannot shoulder accountability. Organisations must be able to explain how they use AI, verify the accuracy of its outputs and document the steps taken to manage risk. “The system told me so” is not a defence.
Best practice involves recording the methodology behind AI-assisted decisions, carrying out sampling and validation, and maintaining auditable evidence of oversight. A structured and documented process – showing how inputs, outputs and reviews were handled – is the foundation of defensible use.
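To make that concrete, here is a minimal illustrative sketch in Python of what such an auditable record might capture. The structure and field names are hypothetical – they are not drawn from the webinar or from any specific tool – but they show the kind of inputs, outputs and review evidence a defensible process would document:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningAuditRecord:
    """Hypothetical audit record for one AI-assisted screening decision."""
    subject: str                  # entity being screened
    model_id: str                 # which model/version produced the output
    sources_consulted: list[str]  # data sources behind the result
    ai_output_summary: str        # what the system flagged (or did not)
    reviewer: str                 # human who validated the output
    review_notes: str             # how the reviewer weighed the findings
    sampled_for_validation: bool  # part of a periodic sampling exercise?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: documenting that an analyst reviewed and confirmed an AI flag.
record = ScreeningAuditRecord(
    subject="Example Trading Ltd",                # invented example
    model_id="adverse-media-screener-v2",         # hypothetical model name
    sources_consulted=["curated adverse media", "sanctions lists"],
    ai_output_summary="Possible PEP connection flagged for review",
    reviewer="j.doe",
    review_notes="Flag confirmed after checking source provenance.",
    sampled_for_validation=True,
)
print(record)
```

However an organisation chooses to store such records, the point is the same: each AI-assisted decision carries with it the evidence of how it was reached and who reviewed it.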
Legal and regulatory frameworks
The regulatory landscape is tightening. Under the EU AI Act and the GDPR, organisations face obligations around transparency, literacy and fairness in automated processing. Data protection impact assessments are becoming an essential governance tool – even in jurisdictions where they are not yet mandatory – helping teams identify risks of bias, discrimination or lack of transparency before systems go live.
Other legal considerations include managing profiling activities, ensuring that data use remains proportionate to purpose, and applying appropriate weighting to different evidence sources.
Managing bias and political influence
Bias in AI-generated results remains a significant concern. Automated watchlist and adverse-media tools may inadvertently amplify political agendas or reflect censorship in regions with restricted press freedom. In these situations, automated outputs must be balanced with contextual review.
Due diligence teams are advised to apply evidential weighting, giving greater value to sources with verified provenance and treating unverified data with caution.
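As a purely illustrative sketch – the provenance categories and weights below are invented, not prescribed by the webinar – evidential weighting can be as simple as discounting findings from sources whose provenance cannot be verified:

```python
# Illustrative only: hypothetical provenance categories and weights.
SOURCE_WEIGHTS = {
    "verified_provenance": 1.0,  # e.g. official registries, curated datasets
    "reputable_media": 0.7,      # established outlets with editorial standards
    "unverified": 0.3,           # unknown provenance; treat with caution
}

def weighted_risk_score(findings: list[tuple[str, float]]) -> float:
    """Average per-finding risk scores, discounted by source provenance."""
    if not findings:
        return 0.0
    weighted = [SOURCE_WEIGHTS.get(src, 0.3) * score for src, score in findings]
    return sum(weighted) / len(weighted)

# A strong unverified hit counts for less than a moderate verified one.
print(weighted_risk_score([("unverified", 0.9), ("verified_provenance", 0.6)]))
```

The specific numbers matter less than the discipline: an output is never taken at face value without asking where the underlying evidence came from.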
Governance and literacy
As AI becomes embedded in compliance and risk management, understanding how it functions is now part of professional competence. Organisations are expected to ensure their staff and their suppliers can explain and justify how AI is used in their processes.
This requirement for AI literacy extends beyond technical skills: it encompasses the ability to question outputs, interpret findings responsibly and identify when expert assurance is needed.
A new standard of care
The overall message of the webinar was clear. AI is changing what “reasonable diligence” looks like. Automation can dramatically extend reach and efficiency, but it does not absolve organisations of responsibility. The legal duty of care now encompasses not only the decision itself but also how that decision was informed by technology (i.e. ‘explainability’).
Effective due diligence in the age of AI means combining scale and speed with human discernment and judgement, legal awareness and transparent governance – ensuring that automation strengthens, rather than weakens, professional accountability.