Transcarent’s role in establishing policy recommendations for deploying AI safely and responsibly across industries

As AI capabilities rapidly evolve, so too do its applications in healthcare. Not all uses of AI pose a safety risk; those that do should be regulated.
In the healthcare industry, HIPAA already provides guardrails for how patients’ health information is used. We recognize, though, that AI’s integration into healthcare creates some gaps, and we support policymakers applying new regulations to those areas.
At Transcarent, we’re excited to help lead this effort as a proud member of the Data & Trust Alliance, which convenes members ranging from startups and small businesses to industry giants and international conglomerates, alongside consumer advocacy voices. By bringing these diverse perspectives together, the group can produce resources, like the Policy Recommendations for the Responsible Use of AI, that are flexible enough to meet the needs of organizations big and small.
Existing sector-specific regulatory authorities: first, do no harm
The Data & Trust Alliance created the policy recommendations as a framework to start the conversation about how we can leverage the advantages of AI without introducing new harms. The framework helps establish guardrails for deploying AI safely and responsibly while accommodating the different approaches to adoption that each organization’s industry or use case demands.
It’s important not to overlook the existing rules and regulations that are already shaping how and when AI is deployed in various sectors. In healthcare, how information is used or disclosed is generally regulated under HIPAA, and many questions about applying AI are answered by HIPAA compliance. But not all of them.
Additionally, Transcarent, like many organizations, is applying NIST’s AI Risk Management Framework as part of our company AI policy. We don’t need to completely reinvent the wheel.
Regulate AI risk, not AI algorithms
Our goal is to foster a conversation about how to leverage AI’s advantages while mitigating potential harms. Different organizations will have varied approaches to AI adoption based on their maturity and industry context, and it's important to recognize that each AI application carries its own level of risk. By regulating high-risk uses of AI rather than the algorithms themselves, companies can move quickly and innovate where appropriate, while applying necessary caution in high-stakes situations.
The critical challenge for both the industry and policymakers is defining what constitutes “high-risk” AI use. This determination will shape the regulatory landscape and ensure that AI deployment maximizes benefits while minimizing risks.
Transcarent is committed to working with the Data & Trust Alliance to build upon the foundation of the policy framework, creating a responsible environment for AI innovation that prioritizes safety and security. We are excited to continue developing resources that are practical and effective for a wide range of organizations, from emerging companies like ours to the established pillars of our economy.
By fostering collaboration and responsible innovation, we can harness the transformative power of AI to improve healthcare and health outcomes while safeguarding patient well-being.