Thomson Reuters prioritizes data security and the ethical application of AI in the development of CoCounsel, our generative AI (GenAI) assistant tailored for professionals in legal, tax, and accounting sectors.
We spoke with Carter Cousineau, Vice President of Data and Model Governance at Thomson Reuters, about how we protect user data in our AI solutions from cyber threats. Carter's responsibilities include advancing our data security and ethics initiatives.
Q&A with Carter
What strategies does Thomson Reuters employ to safeguard user data and adhere to privacy regulations?
Any project that involves AI and data applications undergoes what we term a “data impact assessment.” While it may sound straightforward, this assessment encompasses a wide range of factors. The model we’ve developed includes data governance, model oversight, privacy considerations, queries from our General Counsel, intellectual property issues, and information security risk management. Our approach began by integrating a privacy impact assessment into the initial data impact assessment.
During these assessments, we refer to specific “use cases” related to Thomson Reuters business projects. We inquire about various aspects, such as:
- The types of data involved in the use case
- The algorithms being utilized
- The jurisdiction relevant to the use case
- The ultimate objectives of the product
This phase is crucial for uncovering numerous privacy and governance challenges.
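The use-case questions above can be pictured as a structured record that triggers follow-up reviews. The sketch below is purely illustrative: the field names, trigger rules, and `DataImpactAssessment` class are assumptions for the example, not Thomson Reuters' actual assessment schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataImpactAssessment:
    """Illustrative record of a use-case review (not TR's actual schema)."""
    use_case: str
    data_types: list = field(default_factory=list)     # e.g. ["client PII", "contract text"]
    algorithms: list = field(default_factory=list)     # e.g. ["LLM"]
    jurisdictions: list = field(default_factory=list)  # e.g. ["EU", "US"]
    objective: str = ""

    def flags(self):
        """Return the follow-up reviews this use case triggers (simplified heuristic)."""
        triggered = []
        if any("PII" in d for d in self.data_types):
            triggered.append("privacy impact assessment")
        if "EU" in self.jurisdictions:
            triggered.append("GDPR review")
        if self.algorithms:
            triggered.append("model governance review")
        return triggered

assessment = DataImpactAssessment(
    use_case="contract-summarization assistant",
    data_types=["client PII", "contract text"],
    algorithms=["LLM"],
    jurisdictions=["EU", "US"],
    objective="summarize uploaded contracts for legal professionals",
)
print(assessment.flags())
# → ['privacy impact assessment', 'GDPR review', 'model governance review']
```

In practice, a real assessment covers many more dimensions (intellectual property, information security risk, General Counsel questions), but the pattern is the same: answers about data, algorithms, jurisdiction, and objectives determine which reviews a project must pass.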
How frequently do you review and refresh security protocols?
With the advent of GenAI, we established specific guidelines for Thomson Reuters, which are documented and consistently updated. These documents detail mitigation strategies and are regularly reviewed throughout the year. We’ve organized each of our standard protocols based on applicable risk scenarios, and these undergo continual assessment.
Additionally, our Responsible AI Hub serves as an internal resource, centralizing data to foster trust. While some audits and updates are annual, many occur more frequently, with mitigations tracked weekly, if not daily, based on the task at hand.
What protective measures are in place to prevent unauthorized access or data misuse?
Our data access management standards align directly with our data governance policy, ensuring that data owners grant only the minimum information necessary to those requesting access. Many security controls are integrated into our data platform, and we use a dedicated tool for role-based access control.
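The principle behind role-based access control with least privilege can be shown in a few lines. This is a minimal, generic sketch with hypothetical role and permission names; it does not reflect Thomson Reuters' actual tooling or permission scheme.

```python
# Minimal role-based access control (RBAC) sketch illustrating least privilege:
# a request succeeds only if the permission is explicitly assigned to the role,
# and anything unlisted is denied by default.
ROLE_PERMISSIONS = {
    "analyst": {"read:aggregated"},
    "data_owner": {"read:aggregated", "read:raw", "grant:access"},
}

def can_access(role: str, permission: str) -> bool:
    """Deny by default; grant only permissions explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can_access("analyst", "read:aggregated"))  # → True
print(can_access("analyst", "read:raw"))         # → False (least privilege)
```

The key design choice is the deny-by-default lookup: an unknown role or unlisted permission never grants access, which mirrors the "minimum necessary information" policy described above.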
What achievements would you like to highlight?
I take great pride in our team’s ability to address ethical concepts and AI-related risks. Identifying data risks within the realm of ethics is particularly challenging and requires comprehensive end-to-end risk management. Building the Responsible AI Hub virtually from scratch involved extensive discussions on recognizing AI risks, as well as determining actionable risk mitigation strategies.
I believe that our efforts over the past three years have enabled us to manage AI risks more swiftly and effectively than many other organizations.
Discover more about how AI is transforming the future of professional work and enhancing productivity.