Journey of AI in Healthcare Ecosystem

The use of AI in healthcare has long been controversial, with questions raised about the ethics of its use, the data it relies on, and the apparent lack of accountability for the decisions it makes.

During this pandemic, it has again become a talking point. While some countries have been able to leverage AI to assist in the treatment, research, and even contact tracing of infected individuals, others have opposed the extent of its use.

To start off, AI has been effective in determining what would be required for the vaccine; had this process not been handed to AI, we would surely still be trying to identify the components likely to work in creating one.

AI Cannot Be Overstated

The use of AI has also been essential in contact tracing, which is where much of the concern has arisen regarding the collection and use of personally identifiable information, in some cases without the consent of the individuals involved.

The benefits of AI cannot be overstated, despite the challenges it faces regarding data collection, data use, and even treatment. In most areas where data protection laws are in place, the law requires that consent be sought (exemptions may exist, such as war, national crisis, or public interest).

While developing AI for the healthcare system, it is important to consider how the training data is licensed. Research data is often released publicly, and such data (whether for training and/or research) needs to be licensed prior to release.

There are various forms of licensing, but in this context they share some key elements, such as an arbitration requirement and usually a copyleft and/or non-commercial clause (where commercial use is required, the licensor must be paid). How a license is drawn up is shaped by the parties involved, the domain of the data used (public or private), and whether there is a desire to commercialize at some point. Examples of licenses used include:

• Prepared licenses
• Bespoke licenses
• Open/Non-Commercial Government Licence
• Multiple licensing
• Public domain licensing

The license also determines how the data can and cannot be used, and which data sets are excluded under the various licenses.

The value of using AI in healthcare also extends beyond research into a vaccine and possible treatments. A hidden benefit is the extent to which AI can free up healthcare providers to focus on treatment: AI can take responsibility for diagnosing patients and, with the aid of robotics, even assist in patient testing, significantly reducing the chances of infection spreading to healthcare providers.

Leveraged to improve

According to reports, AI has a far more accurate diagnostic record than human doctors, and this should be leveraged to shorten the time from sample collection to completed diagnosis, time that could save many lives.

AI was used to identify the 70 compounds most likely to be essential in making a vaccine for the coronavirus. This was a huge milestone, and AI continues to be leveraged to run simulations of possible vaccines, which of course means a vaccine would be ready sooner than it would be without AI. AI may not yet play a direct role in treating COVID-19 patients, but it has established the basis for the hunt for a vaccine.

AI continues to improve on itself in order to make better decisions, but it can still get some decisions wrong, and that is what worries people most. How do you punish or enforce penalties on an entity that is not living? Do you go after the creator of the AI, the people responsible for the data it used, and/or the AI itself?

Code was Flawed

This challenge is, simply put, one of punishment and philosophy: determining the repercussions of wrong actions by AI. A further complication is that the algorithms improve on themselves over time, meaning the system is no longer what the programmer originally wrote. However, one could argue that the code was flawed from the beginning. Ownership of the intellectual property rights to the algorithms is one way to determine who should be held responsible should AI make a decision that leads to harm. The rights can be owned by any party involved in the development and use of the AI, or by a combination of the various stakeholders.

An agreement can be sought whereby the provider of the algorithms owns a portion of the IP and another portion is owned by the party that provides the knowledge base used to teach the AI, such as a medical institute. The ownership could be risk oriented (sharing the risks involved in any wrong done by the AI) or commercially oriented (should the owning party or parties wish to further commercialize the IP).

Regarding the ethics around punishment for the decisions of AI, one way to remove this wall is to have humans review the decisions made by AI and either reject or approve them. At that stage we are back in the more familiar territory of punishment: should something go wrong, there is a person to hold accountable.

Experimentation

The lack of regulation, or soft regulation, in this space can be viewed as a double-edged sword. One edge allows research to thrive freely and experimentation with AI to be pushed closer to replacing certain aspects of healthcare handled by humans. The other edge is the improper use of data for research and for building the AI's knowledge base, and the lack of proper consequences, or of clarity about where those consequences should fall.

We may never have a world that trusts AI in healthcare to the extent of letting it provide treatment, despite the likelihood that this would improve treatment as it has improved diagnosis. This could be a result of the loss of the human touch we have grown accustomed to during treatment, or of the lack of punishment for wrong decisions made by AI. AI may have the potential to vastly improve healthcare, but what it has failed to do is shape human philosophy strongly enough for us to see that forcing AI to play in our playground keeps its potential on a short leash.