Solversa Legal Solutions


Navigating the Wild West of AI Compliance Without Losing Your Sanity - or Your Sales


Sally Baraka, Esq.


Introduction


Congratulations – your product development team is building a new AI tool. Because of your global customer base, you are responsible for ensuring this product complies with the rapidly evolving AI regulatory landscape, while also enabling your sales team to close deals with customer-friendly terms. This guide is for in-house lawyers and engineers, designed to help you chart a path forward to product launch and beyond.


Pro Tip: For in-house legal teams, start by educating your development team about the AI regulations that apply to your business and customers. Engineers are often logical problem solvers. Giving them some background on the regulatory landscape within which the new AI tool will operate can lead to creative, compliant solutions. Remember, compliance is not a one-time task but an ongoing effort. Building champions in engineering will help you establish a sustainable, proactive data governance and AI compliance program.


Understand the Nature of Your AI Product's Functionality. 


AI regulations are evolving globally, and many target AI features that analyze sensitive personal data or that can manipulate human behavior or decision-making. For example, the EU AI Act, the world's first comprehensive AI regulation, regulates AI according to the risk posed by the use case, classifying systems into four categories: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. The Act outright bans AI practices it deems unacceptable, including: cognitive behavioral manipulation of people or specific vulnerable groups; social scoring that ranks people based on behavior, socio-economic status, or personal characteristics; biometric identification and categorization of people; and real-time, remote biometric identification systems, such as facial recognition in public spaces. Understanding where your company's AI product falls within the EU's risk paradigm will be crucial as you navigate the legal landscape that will govern use of your product.
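For engineering teams, the Act's risk tiers can work as a triage checklist during product design. The sketch below is a simplified, hypothetical helper, not a legal determination: the flag names and the mapping are illustrative assumptions, and real classification turns on the Act's detailed definitions and annexes.

```python
# Hypothetical triage helper mapping product use-case flags onto the
# EU AI Act's four risk tiers. Flags and tier mappings are illustrative
# assumptions, not a legal determination.

def classify_eu_ai_risk(use_case: dict) -> str:
    # Prohibited practices: social scoring, behavioral manipulation,
    # real-time remote biometric identification in public spaces.
    if (use_case.get("social_scoring")
            or use_case.get("manipulates_behavior")
            or use_case.get("realtime_public_biometric_id")):
        return "Unacceptable Risk (prohibited)"
    # High-risk examples: employment and credit decisions.
    if use_case.get("employment_decisions") or use_case.get("credit_decisions"):
        return "High Risk (conformity assessment, documentation, oversight)"
    # Limited risk: transparency duties, e.g. disclosing that the
    # user is interacting with an AI system.
    if use_case.get("interacts_with_humans"):
        return "Limited Risk (transparency obligations)"
    return "Minimal Risk"

print(classify_eu_ai_risk({"credit_decisions": True}))
```

A checklist like this is no substitute for counsel's analysis, but it gives engineers a shared vocabulary for flagging features that need legal review before they ship.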


While the US does not have a federal AI law, keep in mind that any use of AI must comply with existing state laws as well as federal laws that apply to the industry in which your product is used. For example, the Equal Credit Opportunity Act (ECOA) prohibits creditors from discriminating against applicants on the basis of race, color, religion, age, marital status, or gender, and it requires lenders to provide a reason when denying credit to an applicant. Although the ECOA does not explicitly address the use of AI, a lender implementing AI in its creditworthiness decisions must still abide by the law. Recent guidance from the Consumer Financial Protection Bureau (CFPB) has explicitly stated that using AI in lending decisions does not exempt lenders from explaining why credit is denied. For this reason, lenders using AI will need to be able to explain how their models were trained on their datasets and the specific factors upon which denials are based.


Similarly, Title VII of the Civil Rights Act prohibits discrimination in hiring and firing. While the use of AI is not explicitly addressed in Title VII, employers who use AI in their applicant tracking systems (ATS) must ensure that any algorithms the ATS relies on do not discriminate against protected classes. Some jurisdictions go further: New York City's Local Law 144, enforced beginning in 2023, requires employers to audit AI used in employment decisions for bias, and to publish the results of that audit on their websites. Understanding how your company's AI product will be used, and the downstream effects of its outputs, is therefore critical both to determining which laws apply to your company's development of the tool and to advising customers on what they must do to comply with the AI laws that govern their use of your company's AI tool.
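The bias audits contemplated by Local Law 144 center on "impact ratios": the selection rate for each demographic category divided by the selection rate of the most-selected category. A minimal sketch of that calculation follows; the category labels and counts are hypothetical, and a real audit involves far more than this single metric.

```python
# Minimal sketch of the impact-ratio calculation used in employment
# bias audits: each category's selection rate divided by the highest
# category's selection rate. Categories and counts are hypothetical.

def impact_ratios(selected: dict, total: dict) -> dict:
    # Selection rate = number selected / number of applicants per category.
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    # Impact ratio = category rate relative to the best-performing category.
    return {g: rates[g] / best for g in rates}

ratios = impact_ratios(
    selected={"group_a": 40, "group_b": 24},
    total={"group_a": 100, "group_b": 100},
)
print(ratios)  # group_b's ratio is 0.24 / 0.40 = 0.6
```

Under the EEOC's long-standing four-fifths rule, a ratio below 0.8, as in the hypothetical above, is commonly treated as evidence of adverse impact warranting closer scrutiny.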


Updating Customer-Facing Master Services Agreements for AI Tools: Data Usage Rights and Disclaimers.


Now that you have a clear understanding of the AI functionality in your new product, review and update your customer-facing master services agreements (MSAs) to ensure alignment with how the product will use customer data. As AI technologies evolve, so too must the legal frameworks governing their use. Your MSA should explicitly define the scope of data usage rights granted to your company, ensuring that customers are fully informed about how their data will be processed, stored, and potentially leveraged to improve AI models. This is also a good time to do some data mapping with your engineering team. Do you know what data is collected by your company, and by which product? How do your products use this data? Does your company aggregate customer data? Does your company use customer data to train its LLMs? These are all questions you will need to answer in order to secure, through your customer agreements, the contractual and legal right to use customer data.
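Those data-mapping questions lend themselves to a simple machine-readable inventory that legal and engineering can maintain together. Below is a hypothetical sketch; the field names, product names, and schema are illustrative assumptions, not an established standard.

```python
# Hypothetical data-map entry tying each product to the customer data
# it collects and how that data is used. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DataMapEntry:
    product: str
    data_collected: list = field(default_factory=list)
    aggregated: bool = False
    used_for_model_training: bool = False
    contractual_basis: str = "MSA data-usage clause"

inventory = [
    DataMapEntry("analytics_dashboard", ["usage logs"], aggregated=True),
    DataMapEntry("ai_assistant", ["prompts", "documents"],
                 used_for_model_training=True),
]

# Flag products whose training use must be covered by MSA language.
needs_review = [e.product for e in inventory if e.used_for_model_training]
print(needs_review)  # ['ai_assistant']
```

Even a lightweight inventory like this gives counsel a concrete checklist when drafting data-usage clauses, and gives engineering a place to record changes before they create a contractual gap.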


Transparency in these agreements not only fosters trust but also mitigates the risk of disputes arising from misunderstandings about data ownership and usage.  In addition to securing the necessary data usage rights, this is an opportune moment to address the inherent limitations of AI tools. AI systems, while powerful, are not infallible.  They can produce inaccurate or misleading outputs, commonly referred to as "hallucinations." To protect your company from potential liability, your agreements should include a clear and conspicuous disclaimer regarding the accuracy of AI-generated results. Consider incorporating language that requires customers to acknowledge that AI tools may produce errors or inaccuracies, and that such outcomes do not constitute a product defect or breach of warranty. This disclaimer should be framed in a way that is both legally robust and accessible to customers, ensuring they understand the limitations of the technology while maintaining their confidence in your company’s commitment to quality and innovation.


By proactively updating your agreements to address data usage rights and AI output disclaimers, you not only safeguard your company’s legal interests but also demonstrate a commitment to ethical and responsible AI deployment. This approach positions your organization as a trusted partner, capable of navigating the complexities of AI while prioritizing transparency and customer protection.


Establishing a Cross-Functional AI Governance Committee. 


Once your product is released, ongoing collaboration with your development team, CISO, and compliance function will be important. Consider creating a cross-functional AI governance committee to ensure that AI tools are developed, deployed, and monitored in ways that align with legal, ethical, and business objectives. To structure and implement such a committee effectively, begin by drafting a clear charter that defines its purpose, scope, and authority. The charter should articulate the committee's mission - such as ensuring AI systems are compliant, ethical, and aligned with business goals - as well as its responsibilities, which may include overseeing risk assessments, compliance audits, policy development, and incident response. It's also important to clarify the committee's decision-making authority, particularly its role in approving or escalating AI-related decisions.


Membership in the committee should be diverse, drawing representatives from various functions to provide a well-rounded perspective. Establishing robust governance processes is another critical step. Regular meetings should be scheduled to review AI projects, compliance status, and emerging risks. Implementing risk assessment workflows, such as bias testing and data privacy reviews, ensures that new AI tools align with regulatory requirements. An incident response plan should also be developed to address AI-related issues, such as data breaches or model failures, including clear escalation paths and communication strategies. Training and awareness programs can further educate employees about AI ethics, compliance, and best practices.


Integration with existing governance structures is vital for consistency and accountability. The committee should report to senior leadership and the board, ensuring visibility into AI-related risks and opportunities. Aligning AI governance with broader enterprise risk management (ERM) frameworks helps maintain a cohesive approach across the organization.


Finally, ongoing monitoring and adaptation are necessary to keep the committee effective. Tracking performance metrics - such as the number of AI-related incidents, compliance audit results, and customer feedback - provides insight into the committee’s impact. By adopting robust compliance frameworks and establishing a cross-functional AI governance committee, organizations can proactively manage the risks and opportunities associated with AI. This structured approach not only supports regulatory compliance but also fosters innovation, builds customer trust, and positions the organization as a leader in responsible AI deployment. 



Copyright © 2025 Solversa Legal Solutions, P.L.L.C.

All Rights Reserved.
