Artificial Intelligence (AI) is changing many facets of daily life, and health care is no exception. Machine learning, predictive modeling and analytics, automated decision-making, and other AI applications are transforming the health care delivery system, and they carry significant ethical and compliance risks. These risks are particularly acute in health care, where a trusting relationship with patients, their families, and key stakeholders is paramount.

Regulators are increasingly mindful of potential misuse and discrimination associated with the use of AI tools. The Federal Trade Commission (FTC), for example, has offered five key principles to guide companies’ use of AI tools:

  • Be Transparent
  • Explain Your Decision to the Consumer
  • Ensure That Your Decisions Are Fair
  • Ensure That Your Data and Models Are Robust and Empirically Sound
  • Hold Yourself Accountable for Compliance, Ethics, Fairness, and Nondiscrimination