Artificial intelligence (AI) systems have become increasingly prevalent in everyday life, and especially in enterprise settings.
These systems have grown more accurate and efficient but, at the same time, more complex and less understandable. Broad adoption of AI requires that humans trust these systems, which in turn depends on the ability to ensure they are fair, robust, explainable, accountable, respectful of individuals' privacy, and cause no harm. To this end, many tools and techniques have been developed both for assessing AI models and for mitigating any potential risks they may pose.
This chapter surveys the existing approaches and technologies available to address each dimension of Trustworthy AI and to create more ethical AI systems. It also discusses the challenges of, and possible solutions for, meaningfully combining these dimensions, indicating areas for further research.
You can read the full scientific work, supported by the iToBoS project, in the book "Ethics in Online AI-Based Systems: Risks and Opportunities in Current Technological Trends", here.