HealthTech Magazines

Insight View of Healthcare IT Professionals

Future of Ethical AI: The best way to predict your future is to create it.

Sophia Greulich

Even if you don't realize it, you interact with artificial intelligence every day: by speaking with a voice assistant, scrolling through social media, or through processes in which it runs hidden. We expect it to make us more efficient, faster, and better informed. It should make our lives easier and give us more time for the things that matter. So much for the ideal world.


AI can be very efficient and augment people in their work. We are good at putting AI to work. But are we putting good AI to work? 


If AI can help doctors make more informed decisions and stay on top of the latest research, that is great. If AI can help companies secure data or predict future outcomes, that is cool and helpful. If AI can help the police identify you at any point in time and knows everything you have searched for, written about, and what your secrets are, that is ... wait a second.


With AI, as with any other tool, it depends on the purpose, the context, the manner, and the impact of its use. Only then can you weigh whether a given use is also a good one. Which brings us to a better understanding of what "good" means.



If AI leads to biased decisions, for example women receiving worse credit scores, minorities being disadvantaged in security screening, or people not being recognized by sensor technology, then it is not good (tech).


This is exactly why we need to talk about ethics and AI.


The basic question in ethics theory is: "What is the right thing to do?"


When we talk about ethics in the context of AI, we want to define how AI can do the right thing and live up to the values of our society. This is more than a philosophical question; it is a socio-technological challenge and a business imperative. But why is it necessary to address it?


The details: 


Social background of AI: The relevance of AI and the number of critical use cases are increasing dramatically, be it in the social sector, the health sector, or government. AI is gaining ever more power over all areas of our lives and our future. Precisely because of this power, we must design AI to fit our value system.


Technical background of AI: AI has technical characteristics that make ethical requirements necessary. We all know the term "black box" for AI: it is often incomprehensible to us humans why an AI makes a certain decision. But if we want to rely on AI-based decisions in the future, we have to be able to understand them. The other challenge relates to data: depending on the training data of an AI solution, a so-called bias can occur, which can cause the solution to perform worse for groups that are already subject to discrimination.
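One simple way to make such a bias visible is to compare a model's accuracy across demographic groups. The following is a minimal, hypothetical sketch; the group labels and predictions are invented purely for illustration:

```python
# Hypothetical bias check: compare a model's accuracy per demographic group.
# All group names and predictions below are invented for this example.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy evaluation data: the model is right 3 out of 4 times for group "A",
# but only 2 out of 4 times for group "B" - a disparity worth investigating.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]
print(accuracy_by_group(records))  # {'A': 0.75, 'B': 0.5}
```

A gap like this does not by itself prove discrimination, but it is the kind of signal that should trigger a closer look at the training data.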


The good news is that these ethical challenges have been recognized and are being addressed by many players in the AI field. In the following, we will look at the relevant aspects of developing ethical AI. 


1. Fairness: The development and use of AI systems must be equitable, non-discriminatory, and ethical.

2. Responsibility: In AI systems, there must be the possibility to ensure and clearly assign responsibility and liability.

3. Benefit for society: AI systems must be used for the benefit of society while respecting societal values and human rights.

4. Data privacy: AI systems must respect users' privacy and data rights. 

5. Interdisciplinary cooperation and collaboration: The ethical aspects of AI must be researched and shaped jointly, across disciplines.

6. Transparency: AI systems must enable transparency for users.

7. Technical Robustness: AI systems must be designed and implemented to be technically robust and secure.

8. Sustainable human-machine cooperation: AI systems must be deployed in a way that promotes sustainable human-machine cooperation.


While some major IT players are struggling mightily with the intricacies of AI, others have focused on challenging potentially critical use cases and establishing methods, processes, and tools for trustworthy AI development. Examples include governance models such as AI Ethics Boards, tools such as AIEthics360 or OpenScale, methods such as Enterprise Design Thinking for AI, or the publication of principles and guides. 


We have the chance to shape AI and its applications. Now is the time to determine the future of AI and thus our society.


We can only do this by engaging in discourse with all players in the field of AI. This concerns politics, society, tech companies, users, scientists, and industry, and we want to help start this discourse.
