Data Security in the Age of AI: Safeguarding Sensitive Information – Q2 2025 Facts & Findings
AI IN THE LEGAL INDUSTRY: BEST PRACTICES
As AI and machine learning are increasingly applied to law practice, they bring both benefits and dangers. On the one hand, they deliver enormous efficiency in rote task execution and, with more recent advances, in analysis and decision-making. On the other hand, if not applied thoughtfully, they can introduce material risks to information privacy and data security. The legal field is attractive to cybercriminals because of the highly confidential and sensitive information it handles, and mismanaged use of AI can introduce new vulnerabilities for them to exploit.
A data breach can have grave consequences for any firm: economic loss, reputational damage, regulatory penalties, and legal liabilities. As AI systems’ role in managing and analyzing sensitive data becomes more significant, lawyers must improve their data privacy and cybersecurity rigor.
Legal professionals must embrace best practices to protect sensitive information from AI-related risks. Generally accepted and fundamental practices in secure data management still apply to using AI tools but with increased urgency. These practices and processes are critical when your firm manages its own systems. When your firm utilizes external services and systems, it is imperative to ensure that the providers follow best practices. The key practices and system implementation areas are encryption, data masking or anonymizing, and process controls and validation.
ENCRYPTION
At the core of data security, encryption converts data into a code only authorized parties can decrypt. Since AI platforms tend to be external services, it is imperative for legal professionals to use end-to-end data encryption at rest (i.e., wherever the data is stored and processed in your system and the AI platform provider’s system) and in transit (i.e., whenever data is being sent or received over the internet through your AI platform provider’s system) to ensure clients’ sensitive data is protected. Encryption safeguards data from interception by cybercriminals.
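To make the at-rest principle concrete, the sketch below shows what symmetric encryption buys you: the artifact that sits on disk is ciphertext, and only a key holder can recover the plaintext. This is a deliberately simplified toy (a SHA-256 keystream XOR, standard library only), not a production scheme; real deployments should rely on vetted algorithms such as AES-256-GCM through an audited library, and the filenames and data here are hypothetical.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key + nonce + a running counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A random nonce ensures the same plaintext never produces the same ciphertext.
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

key = secrets.token_bytes(32)
stored = encrypt(key, b"Client deposition transcript")   # what sits on disk
assert b"deposition" not in stored                       # no plaintext at rest
assert decrypt(key, stored) == b"Client deposition transcript"
```

Encryption in transit follows the same idea applied to data on the wire, which in practice means insisting on TLS for every connection to and from the AI provider.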
We should see increased adoption of an emerging approach called homomorphic encryption, which enables computation directly on encrypted data rather than on plaintext. This technology further protects privacy by eliminating the window of exposure that standard encryption schemes create when data must be decrypted for processing.
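The idea can be illustrated with textbook RSA's multiplicative homomorphism using deliberately tiny numbers: multiplying two ciphertexts yields a valid ciphertext of the product, so an untrusted service can compute on data it never sees in the clear. This is illustration only; real homomorphic schemes (e.g., Paillier, BFV, CKKS) are far more capable and use secure parameters.

```python
# Textbook RSA with toy parameters p=61, q=53: n = 3233, e = 17, d = 2753.
n, e, d = 3233, 17, 2753

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

c1, c2 = enc(4), enc(6)
# An untrusted party can multiply the ciphertexts without decrypting either one...
c_product = (c1 * c2) % n
# ...and only the key holder decrypts the result of the computation.
assert dec(c_product) == 24
```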
DATA MASKING OR ANONYMIZING
Data masking conceals information in a database by replacing it with substituted content, so that cybercriminals cannot obtain sensitive details even if they reach the data. This matters most during system development and testing, when real data could be exposed through vulnerabilities in the testing environment. Legal professionals can be confident that masked data remains secure even in less controlled environments. This is especially important when introducing AI platforms, since they generally sit outside the system’s own environment.
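A minimal masking pass might look like the sketch below, which uses regular expressions to replace U.S. Social Security numbers and email addresses with fixed placeholders before data leaves a controlled environment. The patterns and placeholder strings are illustrative assumptions; production masking tools also handle names, account numbers, and context-dependent identifiers.

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.%+-]+@[\w.-]+\.[A-Za-z]{2,}\b")

def mask(text: str) -> str:
    # Substitute sensitive tokens so test and AI environments never see them.
    text = SSN_RE.sub("XXX-XX-XXXX", text)
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    return text

record = "Claimant 123-45-6789, contact jane.doe@example.com"
print(mask(record))  # → Claimant XXX-XX-XXXX, contact [EMAIL REDACTED]
```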
PROCESS CONTROLS AND VALIDATION
It is critical for businesses and organizations to conduct regular security audits and vulnerability assessments aimed at identifying and mitigating potential exposure of sensitive data. Controls and audits should be both technical and operational. Controls audits such as SOC 2 help ensure your firm follows sound operational practices around information security and privacy. It is equally important to conduct regular, broad technical assessment exercises to identify system vulnerabilities. Exercises such as external and internal penetration testing, compatibility checks, and application vulnerability scanning are more important than ever. Third-party risk assessments of your AI platform providers should demonstrate the effectiveness of those providers’ controls audits. This proactive approach ensures that all measures remain current and effective in defending against cyberattacks.
LEVERAGING ADVANCED TECHNOLOGIES FOR ENHANCED DATA SECURITY
CYBERSECURITY TOOLS
The continually advancing nature of cyberattacks necessitates improved technologies to ensure data security. AI can continuously process large datasets in real time to identify patterns and anomalies that threaten our systems. Used as part of your cybersecurity arsenal, AI can provide early warnings of threats, which can then be blocked or mitigated. Check that your cybersecurity toolkit both addresses AI-based hazards and leverages AI to defend against threats and identify vulnerabilities.
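The anomaly-flagging idea behind such tools can be sketched with simple statistics: flag any observation that deviates sharply from the baseline. Commercial AI security products use far richer models; the hourly login counts and the z-score threshold below are hypothetical, chosen only to show the concept.

```python
from statistics import mean, stdev

def flag_anomalies(samples: list[float], threshold: float = 3.0) -> list[int]:
    # Flag indices whose value deviates from the mean by more than
    # `threshold` standard deviations.
    mu, sigma = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples) if abs(x - mu) > threshold * sigma]

# Hypothetical hourly login counts; hour 5 shows a suspicious spike.
logins = [12, 15, 11, 14, 13, 240, 12, 14]
print(flag_anomalies(logins, threshold=2.0))  # → [5]
```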
VERIFICATION OF AUTHENTICITY USING BLOCKCHAIN
AI adoption may exacerbate the challenge of content authentication. When machines can generate and edit content, how can we guarantee that similar or derivative content produced by trained professionals has not been compromised? Blockchain technology provides a tamperproof, decentralized ledger for transactions and content. In the legal industry it is used to execute contracts and to record, confirm, and track changes to case files and other vital documents exchanged between parties. We can expect blockchain to be more commonly used to certify content authenticity.
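The tamper evidence a blockchain provides comes from chaining cryptographic hashes: each block records the hash of its predecessor, so altering any earlier entry invalidates everything after it. The toy ledger below (plain Python, no consensus network, hypothetical names) illustrates only that chaining property.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents, which include the previous block's hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list[dict], document: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "document": document})

def is_intact(chain: list[dict]) -> bool:
    # Verify every block still points at the true hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list[dict] = []
append(ledger, "Contract v1 signed")
append(ledger, "Amendment A recorded")
assert is_intact(ledger)
ledger[0]["document"] = "Contract v1 (altered)"   # tampering with history...
assert not is_intact(ledger)                      # ...is immediately detectable
```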
BUILDING AND MAINTAINING A PROACTIVE DEFENSE POSTURE
With the emergence of generative AI, the cybersecurity stakes are higher than ever. It is time for firms to redouble their vigilance and efforts around data privacy and security. All the best practices still apply, with the aforementioned points gaining emphasis where the use of AI is concerned. Beyond that, the following considerations should be covered.
It is paramount to choose the right AI vendors to ensure data security. Third-party risk assessments must be updated and applied to any service or system provider using AI. It is critical to review security policies and practices as well as validation and certification artifacts. When using AI, ask these important questions:
• Does the service provider store your or your client’s data on their systems after processing? If so, how, why, where, and for how long?
• Does the service provider train their AI model using your or your client’s data? If so, is that acceptable? (Typically, it is not acceptable for law practices.)
• How does the service provider ensure that bias, hallucinations, and regression do not adversely affect the accuracy and quality of their service?
Data ownership is a critical concern when vetting AI vendors. Firms should ensure they retain ownership of their data, and vendors should not be able to use data beyond the scope of the agreement.
Understanding how vendors allow access to the data and with whom it is shared is critical to maintaining security. Vendors must have clear policies outlining which people can access the data and under what conditions, as well as any other third parties with whom the data might be shared. Firms should also ask about the transparency of the AI system. A vendor should be able to explain how its AI models work and how it ensures the data fed to those models is secure.
By following best practices in cybersecurity and properly vetting AI vendors, firms can help ensure that their organization and the organizations they partner with protect the privacy and confidentiality of client and firm information. By redoubling these efforts, firms can reduce cybersecurity risk while enjoying the immense advantages that AI promises.
Tony Donofrio is the chief technology officer at Veritext. He develops and supports the mission-critical systems clients, reporters, and employees use daily. He focuses on ensuring clients and staff have the best experience with easy-to-use, highly reliable, and highly secure tools.