With the growing hype around artificial intelligence and its recent elevation among national policymakers, AI remains a hot-button issue, one filled with promises of positive change as well as real challenges.
Similar to the ever-changing tides of tech advancement is the change in the way we discuss the future of health care — specifically, the future of “precision medicine.” At the forefront of this enhanced delivery of care are AI and machine learning. Most have heard of AI in the context of autonomous vehicles, but what does it mean for health care? When integrating an AI platform into any health care environment, it’s crucial to understand the importance of balancing ethics with efficiency. If AI is to benefit patients, we must first learn how to protect their data; only then can we reap the benefits a successful AI platform can truly provide.
Law firms can learn from one another, and from organizations in the health care and tech realms, about how best to serve clients who are looking to integrate an AI platform or are already operating one. In reality, most of these organizations are likely already using AI in some shape or form, which makes it even more crucial to understand the requirements of both man and machine, separately and together, to create a meaningful impact.
Here are a few examples of how AI is being used in health care, and what to be mindful of if you’re representing AI-involved organizations:
Helping, not replacing people
While computers can far outpace humans at processing and analyzing big data, the human touch is still essential for overarching assessment and for achieving larger goals. Staying educated about the line between AI and the people using it is an integral part of corporate responsibility in the AI space, especially for legal professionals ensuring their clients aren’t putting themselves at risk through misuse.
Contrary to the vision of seeing something akin to “The Terminator” at your annual physical, AI is actually extremely beneficial as a way to enhance the capabilities of the user, whether that user is a clinician seeking a diagnosis or a researcher exploring the traits of an unknown disease. There’s no question that AI can process and analyze information at a rate far beyond any human capacity, but human intellect remains a key component — not just in further training the algorithm or interpreting the information that’s presented, but in making connections about how best to use that information in the future.
In the hospital, AI is not only leading to quicker and more accurate diagnoses but also changing the way clinicians think, pointing them toward different ways of problem-solving and bringing to their attention diagnoses they might never have traditionally considered.
More broadly speaking, if the health care industry is to play a major leadership role in the development and application of AI, it’s imperative to demonstrate corporate responsibility and invest in education.
All about the data
Sharing data and protecting privacy are in tension. It’s crucial that we share data across a large network of partners — clinicians, labs, researchers, and academia. Sharing always raises concerns about patient privacy and ethical standards, but without it we will never amass the big data that robust AI solutions require.
In addition to educating users and consumers about the limits and ethical concerns of AI, companies must consider how they can influence society. Will AI be a tool only for the wealthy, the white, and the West? Or can it level the playing field?
It’s widely accepted that AI is only as good as the data it’s trained on. Looking at data inputs and the resulting analysis, it’s easy to see how the “garbage in, garbage out” premise applies to inequalities — in this case, “disparities in, disparities out.” (It’s hard to forget the stories of AI bots or algorithms becoming racist or disturbingly morbid after learning from the masses on social media or internet chat forums.)
Consider facial analysis — technologies that can suggest possible genetic syndromes from a facial scan — used to detect rare and difficult-to-diagnose diseases. When such an algorithm is trained on data sets from multiple populations, bias decreases, because the algorithm has learned to recognize the many differences and intricacies in how these conditions physically manifest across ethnicities.
In that same vein, it can be difficult for some organizations to obtain the amount of data required to achieve that level of ethnic equitability, which raises questions of privacy and security, and the legal obligations that follow.
Fortunately, many health care organizations working with AI can de-identify the information they hold, such as patient photos, making it possible to share the insights derived from the data without sharing the data itself.
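In its simplest form, de-identification means stripping direct identifiers from a record before sharing the derived analysis. The sketch below is a minimal, simplified illustration; the field names are hypothetical, and real-world de-identification (for example, the HIPAA Safe Harbor method) covers many more identifier types and is far more rigorous.

```python
# Minimal sketch: remove direct identifiers from a patient record so the
# derived analysis can be shared without the underlying personal data.
# Field names are hypothetical, chosen for illustration only.

DIRECT_IDENTIFIERS = {"name", "date_of_birth", "address", "photo", "mrn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "date_of_birth": "1984-07-02",
    "photo": "<raw image bytes>",
    "mrn": "123456",
    # Derived, non-identifying output of the analysis:
    "syndrome_likelihoods": {"Noonan": 0.72, "Williams": 0.05},
}

shared = deidentify(record)
# 'shared' now contains only the derived analysis, not the photo or identity.
```

The key design point is that the raw inputs (the photo, the identity) never leave the organization; only the aggregated or derived output does.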
Replacing hype with hope
AI is already a prominent tool in the arsenal of applications tech and health care companies have at their disposal today, and this trend will only continue to grow. As with any new venture, the best way to understand and adapt to AI is to embrace it, whether it’s within your own organization or your client’s.
However, the potential for unfavorable outcomes is just as great as the potential for positive ones. With the use of AI and the relative ease of access to facial photos, abuse by health insurance companies or employers is possible. Fortunately, with emerging FDA regulations and legislation such as the Genetic Information Nondiscrimination Act of 2008, this type of discrimination is becoming easier to prevent.
Another area in health care where AI can enable positive outcomes is the use of electronic health records (EHRs). Extracting and analyzing information from patient data has always been central to health care, but doing so in a safe, secure, and timely manner remains a challenge. There is a wealth of data stored in EHRs; being able to share that data in a meaningful way among health care providers, while still monitoring quality and integrity, is a major hurdle that organizations hope to overcome with AI.
Successfully extracting this type of data with AI would enable predictive analytics and decision support, giving clinicians, researchers, and other health care professionals the ability to use that information in less siloed ways that lead to better outcomes for patients.
Dekel Gelbman is CEO of FDNA.