The RAIES project emerged from a FAPERGS public notice and is led by NAVI AI New Ventures and AIRES PUCRS. Its main objective is to make AI more ethical and safe for everyone who uses it. "Today, many companies in the public and private sectors use Artificial Intelligence (AI); our goal is to support these developers through this interdisciplinary project. We have students from the Escola Politécnica, the School of Law, the School of Medicine, and the School of Humanities, which is where Nicholas and I work with ethics. Besides being interdisciplinary, this project is intergenerational, because we have colleagues from different age groups", explains researcher Nythamar Hilario, coordinator of the project.
Artificial Intelligence is increasingly present in our lives: tasks previously performed by people are now delegated to AI-based systems, and the so-called Fourth Industrial Revolution is the culmination of the digital age. One of the great differences between the current wave of technological modernization and those of the past is that machines are progressively surpassing our cognitive capabilities in several areas.
Some examples of the use of AI by the Brazilian public authorities:
Caixa Econômica Federal uses AI to predict fraudulent electronic transactions;
The Federal Supreme Court uses AI to categorize legal proceedings under general repercussion;
The Federal Police Department uses AI for facial recognition and natural language models for risk assessment (e.g., fraud detection).
Like any technology, AI can be misused, sometimes dangerously. There are many cases in which the incorrect use of AI has caused serious harm to consumers and to the reputation of companies. Thus, ethical issues, risk assessment, and security measures are factors that cannot be overlooked in the development of this type of technology. For example:
Facial recognition systems may have racist biases (Lohr, 2018; Nunes, 2019);
NLP (Natural Language Processing) systems can have sexist and misogynistic biases (Wolf et al., 2017; Balch, 2020);
Classification systems can discriminate against members of the LGBTQ+ community (Wang & Kosinski, 2017; Agüera y Arcas et al., 2018).
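One way biases like those above are diagnosed in practice is by measuring how strongly word vectors associate with gendered anchor words, in the spirit of association tests such as WEAT (Caliskan et al., 2017). The sketch below is illustrative only: the tiny 3-dimensional vectors are invented for the example and do not come from any real model, where embeddings would instead be loaded from a trained system.

```python
# Toy sketch of embedding-association bias measurement.
# The vectors below are made up for illustration; real audits would
# load embeddings from a trained language model.
import numpy as np

embeddings = {
    # gendered anchor words
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    # occupation words, with a deliberately stereotyped skew baked in
    "engineer": np.array([0.8, 0.2, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.1]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_skew(word):
    """Positive -> the word sits closer to 'he'; negative -> closer to 'she'."""
    v = embeddings[word]
    return cosine(v, embeddings["he"]) - cosine(v, embeddings["she"])

for w in ("engineer", "nurse"):
    print(f"{w}: skew = {gender_skew(w):+.3f}")
```

A large skew for occupation words that should be gender-neutral is one concrete signal of the sexist bias the literature above describes; mitigation techniques then try to reduce such skews without destroying useful structure in the embedding space.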
The project was thus born with the intention of encouraging the creation and formalization of a new kind of agent within organizations and companies that develop technologies and solutions based on such systems: one responsible for preventing and mitigating the possible side effects associated with the use of AI.
Main goal: develop foundational techniques and analyses to help developers and companies that produce intelligent-system (AI) applications institute policies that promote the development of Ethical and Safe AI. Specific objectives:
Structure an interdisciplinary discussion group on the project theme (Ethical and Safe AI);
Perform applied research to evaluate the influence of ethics research and moral dilemmas on companies and professionals in the technology field;
Write a best-practices manual offering a series of tools that help developers conceive ethical and safe models and systems;
Develop methodologies and computational tools that help companies and developers implement responsible AI models;
Produce efficient, valid study methods for creating ethical AI;
Test methodologies and computational tools with companies and startups focused on developing AI-powered applications;
Refine our methodology based on the feedback and experience gained from our case studies (i.e., applications of the methodology in different companies/sectors);
Publish our methodology/manual and assist in training HR experts in AI ethics and security;
Hold two international conferences on Ethics and AI Safety, with the aim of making the state of RS a reference in the area;
Support government bodies and civil society in the discussion and regulation of AI in the country.
Check out the full text: